Imitated Voices… A New Artificial Intelligence Threat

More than a quarter of voices imitated using what is known as “deepfake” technology succeeded in deceiving even attentive listeners, a study by University College London has revealed.

More than 500 participants trained to distinguish real voices from imitated ones correctly identified only 73% of the imitated voices in the study, according to the university’s researchers.

Deepfakes are an application of artificial intelligence used to generate voices, images, and videos that appear to come from real people.

The study was conducted in both English and Mandarin Chinese, and the results were similar in the two languages. The research team found, however, that listeners relied on different cues: English speakers tended to distinguish human voices from AI-generated imitations by breathing patterns, while Mandarin speakers relied on the rhythm, pace, and fluency of speech.

Carefully imitated voices are already being used to defraud people: posing as friends or business partners, fraudsters convince victims to transfer money to them.

Recent developments in artificial intelligence technology have raised concerns about the potential spread of voice imitation fraud, especially with the high level of accuracy that AI applications can achieve.

The research team at the British university warned that “due to technological advancements, it has become possible to generate a precise copy of anyone’s voice using a short recording of their speech.”

The researchers noted that the study’s results may not accurately reflect real-world scenarios, since participants, even those who were not trained, knew they were taking part in an experiment and were therefore primed to listen for imitated voices.

Currently, most efforts to combat voice imitation rely on automated AI detection systems. In the university’s experiment, these systems performed at about the same level as the alert human participants, but they outperformed people who did not know they were taking part in a voice imitation detection test.
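To illustrate the general idea behind such automated detectors, the following is a minimal sketch, not the study’s actual system: it summarises each audio clip with averaged MFCC features and fits a simple binary classifier to separate real from AI-generated speech. The file names, feature choice, and model are illustrative assumptions.

```python
# Minimal sketch of an automated voice-imitation detector (illustrative only).
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Load a clip and average its MFCCs over time into one feature vector."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labelled clips: 0 = real voice, 1 = AI-generated imitation.
real_clips = ["real_01.wav", "real_02.wav"]
fake_clips = ["fake_01.wav", "fake_02.wav"]

X = np.array([clip_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))

# Train a simple classifier and score an unseen clip.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([clip_features("unknown_clip.wav")]))  # 1 -> likely imitation
```

Real-world detectors are far more elaborate, but the same pattern applies: extract acoustic features, train on labelled real and synthetic speech, and flag suspicious audio.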

The researchers concluded that, given the significant advances expected in voice imitation technologies, more capable tools will be needed to detect them effectively.
