Artificial intelligence (AI) will be needed to fight back against realistic AI-generated deepfakes, experts say.
The World Intellectual Property Organisation (WIPO) defines a deepfake as an AI technique that synthesises media by either superimposing human features on another body or manipulating sounds to generate a realistic video.
This year, high-profile deepfake scams have targeted US Secretary of State Marco Rubio, Italian defence minister Guido Crosetto, and several celebrities, including Taylor Swift and Joe Rogan, whose voices were used to promote a scam promising people government funds.
A deepfake was created every five minutes on average in 2024, according to a recent report from the Entrust Cybersecurity Institute.
What impacts do deepfakes have?
Deepfakes can have serious consequences, such as tricking officials into disclosing sensitive information to scammers who sound like Rubio or Crosetto.
“You’re either trying to extract sensitive secrets or competitive information or you’re going after access to an email server or other sensitive network,” Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations.
Synthetic media can also aim to alter behaviour, as in a scam that used the voice of then-US President Joe Biden to discourage voters from participating in their state’s elections last year.
“While deepfakes have applications in entertainment and creativity, their potential for spreading fake news, creating non-consensual content and undermining trust in digital media is problematic,” the European Parliament wrote in a research briefing.
The European Parliament predicted that 8 million deepfakes will be shared throughout the European Union this year, up from 500,000 in 2023.
What are some ways AI is fighting back?
AI tools can be trained as binary classifiers, learning to label the data fed into them as either real or fake.
For example, researchers at the University of Luxembourg said they presented a model with a series of images tagged as real or fake, so that it gradually learned to recognise the patterns found in fake images.
“Our research found that … we could focus on teaching them to look for real data only,” researcher Enjie Ghorbel said. “If the data examined doesn’t align with the patterns of real data, it means that it’s fake”.
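In code, the binary-classification approach looks roughly like the sketch below. This is a minimal, hypothetical PyTorch example, not the Luxembourg team’s actual setup: the folder layout, network size, and training settings are illustrative assumptions.

```python
# Minimal sketch: training an image classifier to label inputs "real" or "fake".
# Assumes a hypothetical folder layout like data/real/*.jpg and data/fake/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# ImageFolder maps each subdirectory ("fake", "real") to a class index.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A small convolutional network with a single output logit; the sign of the
# logit indicates which class the model leans towards.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 1),
)

criterion = nn.BCEWithLogitsLoss()  # standard loss for binary classification
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images).squeeze(1)
        loss = criterion(logits, labels.float())  # labels: 0 or 1 per image
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Real detectors are trained on far larger datasets with more elaborate architectures, but this real-versus-fake labelling loop is the core of the binary-classification approach the researchers describe.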
Another solution proposed by Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security, is a system that analyses millions of data points in any person’s speech to quickly identify irregularities.
The system can be used during job interviews or other video conferences to detect if the person is using voice cloning software, for instance.
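Pindrop has not published its implementation, but the general idea of scoring how far a voice sample’s acoustic features fall from those of genuine speech can be illustrated with a rough sketch like the one below; the feature choice, file names, and threshold are all hypothetical.

```python
# Rough illustration (not Pindrop's actual system): flag a voice sample whose
# acoustic features deviate strongly from a profile built from genuine speech.
import numpy as np
import librosa

def mfcc_profile(path):
    """Extract a mean MFCC feature vector from an audio file."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Build a reference profile from recordings known to be genuine
# (file names are placeholders).
genuine = np.stack([mfcc_profile(f"genuine_{i}.wav") for i in range(10)])
mean, std = genuine.mean(axis=0), genuine.std(axis=0) + 1e-8

def irregularity_score(path):
    """Average z-score distance from the genuine-speech profile."""
    features = mfcc_profile(path)
    return np.abs((features - mean) / std).mean()

# A high score suggests the sample does not match the patterns of real
# speech; the 3.0 threshold is arbitrary and would need tuning in practice.
score = irregularity_score("interview_sample.wav")
print("possible voice clone" if score > 3.0 else "consistent with genuine speech")
```

Production systems analyse far richer feature sets across millions of data points, as Balasubramaniyan describes, but the principle is the same: measure how far a sample strays from what real speech looks like.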
Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, Balasubramaniyan said.
“You can take the defeatist view and say we’re going to be subservient to disinformation,” he said. “But that’s not going to happen”.
The EU AI Act, which comes into force on August 1, requires that all AI-generated content, including deepfakes, be labelled so that users know when they come across fake content online.