Artificial intelligence (AI) generated deepfakes are likely to have a "massive" impact on voters in future elections and there isn't much that can be done right now to stop it, according to an AI advisor for the United Nations (UN).
Speaking with Fox News Digital, Neil Sahota said his sources warned the growing use of deepfake advertisements may very well be "the greatest threat to democracy."
"A lot of people, and I think those in the media too, are calling the 2024 election 'the deepfake election' that is probably going to be marred by tons and tons of deepfakes," Sahota said. "Not much can be done right now to stop any of that."
While the UN and various other organizations and corporations are working quickly to roll out software that can detect deepfakes, Sahota noted that common verification tools, such as watermarks, are relatively easy to circumvent in their current iterations.
Furthermore, the chance of successfully detecting AI-generated content varies greatly depending on the medium. For example, deepfake videos often leave several markers for identification.
An analyst can study the person's body language in the video, determine whether the audio syncs correctly with the individual's mouth, monitor changes in lighting and shadows, and check each still frame for artifacts. Unfortunately, this analysis takes time and resources in an age when content can go viral overnight.
"If someone releases a very damaging deepfake video two days before the election, that may not be enough time to counteract it and prove it and get people to believe that," Sahota said.
Deepfakes have already had an impact on the political system worldwide. In April, the Republican National Committee (RNC) released the first fully AI-created political ad, targeting the Biden administration on China and crime. Sahota said the Democratic National Committee (DNC) refuses to say whether it has made similar AI content.
AI has also impacted the recent elections in Turkey. Sahota said over 150 deepfake videos were captured and debunked on social media.
"People need to have information to be informed voters. If you don't know what to trust, then you have these AI systems that, well, they know you like a best friend and can send you a very specific targeted fake ad. What do you do?" he added.
For years, various organizations and individuals have been working to train AI in psychology, behavioral science and linguistics. These AI systems get to know an individual's opinions, hobbies and interests. Sahota said they even know which words will sway, connect with and persuade you.
While many researchers are always looking for the big "home run" deepfake, such as Volodymyr Zelenskyy telling Ukrainian troops to surrender, bad actors are also "micro-targeting" people to sway certain subsets of the population.
A recent deepfake of Hillary Clinton showed the former presidential candidate saying she liked Florida Governor Ron DeSantis and would endorse him if he ran for president. Sahota said these videos are manipulating people's decisions on a smaller scale that is often overlooked.
Although AI videos are a valid concern, Sahota said the use of psychological AI tools has already been "perfected" in marketing, where people can create a kind of "echo chamber" effect.
If a person is subscribed to someone's newsletter or sees something on their feed, the AI algorithm reinforces this over time. This raises the question: is a person choosing to vote for someone because it's their own idea, or has the idea been planted in their consciousness?
"It's like the movie Inception," Sahota said. "Someone's actually planted that in your mind. And the best way to create buy-in is for you to think it's your own idea. And that's what a lot of these, unfortunately, AI tools are being used for."
Perhaps the most significant concern to the UN is what happens when someone claims to be the victim of a deepfake but is actually attempting to brush off a legitimate video, picture or audio recording.
"Somebody says something a little bit off, okay, we can kind of get that. But someone that actually said something is now trying to spin it off as a deepfake. How do you disprove the negative? There's no way to do a real analysis to do that. And even if you do the real analysis, some people will still be suspicious about the results," Sahota said.
According to Sahota, the Federal Election Commission (FEC) is deadlocked on what to do about deepfakes because it is uncertain whether regulating machine-generated content even falls within its domain. With misinformation and misleading claims reaching "epic proportions," Sahota said a shift in mentality may be necessary.
"That kind of cultural shift takes time, and it's a big change," he added. "There will be a lot of resistance to that. And unfortunately, that spin game of 'oh, I didn't actually say that.' That, for sure, is going to happen. We've already, well, we've seen like 100-plus years of the spin already in U.S. politics. So, that's the biggest challenge."