OpenAI, the generative artificial intelligence (AI) giant behind ChatGPT, on Friday introduced an audio cloning tool whose use will be restricted to prevent fraud and crimes such as identity theft.
This AI model, called “Voice Engine,” can reproduce a person's voice from a 15-second audio sample, according to a statement from OpenAI about the results of a small-scale test.
“We recognize that the ability to generate human-like voices carries serious risks, which are especially acute in an election year,” the San Francisco-based company said.
“We are working with US and international partners from government, media, entertainment, education, civil society, and other sectors and taking their feedback into account as we develop the tool.”
With crucial elections taking place around the world this year, disinformation researchers fear the misuse of generative AI applications (the automated production of text, images, and more), and especially of voice cloning tools, which are cheap, easy to use, and difficult to trace.
OpenAI confirmed that it had adopted a “cautious and informed approach” before wider distribution of the new tool “due to the potential for misuse of synthetic voices.”
This cautious rollout comes after a major political incident, when a consultant working on the presidential campaign of a Democratic rival of Joe Biden developed an automated call that impersonated the US president, who is seeking re-election.
The voice imitating Joe Biden urged voters to abstain from voting in the New Hampshire primary.
Since then, the United States has banned calls that use AI-generated cloned voices, in an effort to combat political and commercial fraud.
OpenAI explained that the partners testing Voice Engine have agreed to rules requiring, among other things, the explicit and informed consent of anyone whose voice is duplicated, as well as transparency toward listeners: it must be made clear to them that the voices they hear are generated by artificial intelligence.
“We have implemented a range of safety measures, including a watermark so we can trace the origin of all audio generated by Voice Engine, as well as proactive monitoring of its usage,” OpenAI said.
Last October, the White House unveiled the rules and principles governing the development of artificial intelligence, including transparency.
Joe Biden has said he was struck by the prospect of criminals using the technology to deceive people by posing as family members.