
With Elections Looming Worldwide, Here’s How to Identify and Investigate AI Audio Deepfakes

By Rowan Philip for NiemanLab

In October 2023, an AI-synthesized impersonation of the voice of an opposition leader helped swing the election in Slovakia to a pro-Russia candidate. Another AI audio fake was layered onto a real video clip of a candidate in Pakistan, supposedly calling on voters to boycott the general election in February 2024. Ahead of the Bangladeshi elections in January, several fakes created with inexpensive, commercial AI generators gained traction with voters by smearing rivals of the incumbent prime minister. And, in the U.S., an audio clip masquerading as the voice of President Joe Biden urged voters not to vote in one key state’s primary election.

Experts agree that the historic election year of 2024 is set to be the year of AI-driven deepfakes, with potentially disastrous consequences for at-risk democracies. Recent research suggests that, in general, about half of the public can’t tell the difference between real and AI-generated imagery, and that voters cannot reliably detect speech deepfakes — and technology has only improved since then. Deepfakes range from subtle image changes using synthetic media and voice cloning of digital recordings to hired digital avatars and sophisticated “face-swaps” that use customized tools. (The overwhelming majority of deepfake traffic on the internet is driven by misogyny and personal vindictiveness: to humiliate individual women with fake sexualized imagery — but this tactic is also increasingly being used to attack women journalists.)

Why AI audio fakes could pose the chief threat this election cycle

Media manipulation investigators told GIJN that fake AI-generated audio simulations — in which a real voice is cloned by a machine learning tool to state a fake message — could emerge as an even bigger threat to elections in 2024 and 2025 than fabricated videos. One reason is that, like so-called cheapfakes, audio deepfakes are easier and cheaper to produce. (Cheapfakes have already been widely used in election disinformation; they involve video purportedly from one place that was actually shot in another, short audio clips crudely spliced into videos, or closed captions blatantly edited.) Another advantage for bad actors is that cloned audio can be used in automated robocalls to target (especially) older, highly active voters with misinformation. And tracing the origin of robocalls remains a global blind spot for investigative reporters.


