United fakes of America: will synthetic media threaten country’s first ‘AI era’ election?
More than four billion people will vote in over 60 elections across the globe this year. The proliferation of AI-enabled misinformation and disinformation has already disrupted many of the world’s most prominent ballots, including those in Mexico, Indonesia, Chad, and many European countries.
The high-stakes US election is no exception. Deepfakes impersonating both President Joe Biden and White House rival Donald Trump circulated widely on social media early in the campaign. Experts agree that this new technology can misinform the public, erode confidence in public agencies, decrease voter turnout, and even provoke conflicts with nation-state adversaries. But solutions exist.
Disinformation is the deliberate dissemination of false or misleading information intended to deceive, manipulate public opinion, push a particular agenda, or obscure the truth. It is a form of propaganda that can be spread through various channels, including news media, social media platforms, websites, and even individuals.
The goal of disinformation is often to sow confusion, create division, undermine trust in institutions, or advance specific political, ideological, or financial interests.
It differs from misinformation, which is false information spread unintentionally or without realising it is untrue. Disinformation campaigns can have significant consequences, distorting public discourse, influencing elections, and eroding faith in democratic processes. Combating disinformation requires media literacy, fact-checking, and efforts to expose and counter false narratives with accurate, reliable information from trustworthy sources.
Deepfakes are synthetic media created using AI algorithms that can manipulate or generate visual and audio content with a high degree of realism. In elections, deepfakes can be used to create fake videos or audio recordings of candidates saying or doing things they never actually did, in order to sway public opinion or sow confusion and distrust.
“I’ve been building human-level AI systems for decades, and it’s a little sad to see the technology used to manipulate people instead of help,” said Ben Goertzel, co-founder of Hong Kong firm Hanson Robotics and the designer of Sophia the Robot, an anthropomorphised AI. “With generative AI, creating and distributing disinformation is now trivial.”
Disinformation campaigns can significantly affect public opinion. In the 2016 US presidential election, Russian operatives used social media to spread false information and sow division among the electorate. While the ultimate impact on the election’s outcome is difficult to quantify, the experience underscores the vulnerability of the democratic process to manipulation by malicious actors.
“You want to know why disinformation is such a problem? Because it works,” said former CIA case officer and defense attorney Jack Rice in a recent interview.
Rice spent years working in, and later teaching, political science, and has observed influence operations in Russia, the Republic of Georgia, Sudan, Uganda, at the International Criminal Court in The Hague, and elsewhere. He says that what’s happening in American politics is no surprise.
“People want information that supports their belief systems,” Rice added. “Deepfakes, disinformation, and misinformation are powerful tools used to create schisms in society and manipulate people’s perceptions, particularly in political and legal systems.
“By disseminating false information that supports people’s beliefs, agenda-driven bad actors can bolster the trust in and perceived legitimacy of false claims, leading both citizens and policymakers to make decisions based on misleading or false information that may not be in their best interest but serves the agenda of those spreading the disinformation.”
‘Party’s Fallin’ Down’
To address the growing concern over disinformation and deepfakes in political communication, The Guardian recently reported that in March 2024, Georgia Representative Brad Thomas presented a video to the state’s judiciary committee featuring AI-generated impersonations of prominent state politicians. Ironically, the video, created using commonly available apps, aimed to illustrate the need for legislation to prevent the misuse of AI in politics.
There are already several high-profile examples of deepfakes in American politics and the 2024 US election. For example, major US political parties traded insults through AI-generated songs earlier this year. In March, the Democratic National Committee released an AI-generated parody song called “Party’s Fallin’ Down” in response to Republican National Committee co-chair Lara Trump’s heavily autotuned track “Anything is Possible.” The online feud demonstrated the ability of AI-enabled content to generate misleading narratives.
Both the major party nominees have been targeted by deepfakes. Ahead of the January 2024 New Hampshire primary election, a fake audio recording of President Biden telling voters not to cast their ballots went viral on social media. A political consultant now faces a $6 million fine from the Federal Communications Commission (FCC) and criminal charges for commissioning a robocall that used AI to impersonate President Biden’s voice, discouraging Democrats from voting in the state’s election.
The FCC ruled in February that using AI-generated voices in robocalls is illegal, and New Hampshire Attorney General John Formella hopes the enforcement actions will deter others from interfering with elections using AI. This high-profile case highlights the potential for generative AI to be used for fraud, scams, and manipulation.
‘October surprise’
Not all disinformation uses AI, but most still relies on technology. Video editing tools and social media help cheap fakes (video or audio clips deceptively edited to distort their original meaning) find an audience online.
For example, in a move that could foreshadow the challenges of the 2024 presidential election, the White House is pushing back against a series of misleading videos that question President Biden’s mental and physical fitness. The videos went viral on social media, painting the president as confused or unaware of his surroundings.
While these videos do not employ advanced AI techniques, experts warn that they can still erode trust among voters and fuel highly polarised partisan attitudes. As the election approaches, concerns are growing about the potential use of AI-generated deepfakes as an “October surprise” tactic, highlighting the need for new rules and regulations to address the use of AI in campaign content.
The recent conviction of former President Trump led to a surge of AI-generated images on social media platforms, where his supporters shared stylised deepfakes portraying him as a victim of a purportedly corrupt system.
These images’ rapid dissemination and viral nature, coupled with false claims from notable figures about the political motivations behind the former president’s legal troubles, create a complex environment for voters seeking reliable information.
The rise of AI-generated deepfakes and disinformation campaigns in elections worldwide should be of great concern to US voters because the same tactics that disrupt global elections are being used in their home territory.
No democracy, including the United States, is immune to the disruptive influence of false information spread through advanced technology. As seen in recent elections in Pakistan, Indonesia, Chad, and Mexico, disinformation campaigns can sway public opinion, erode trust in democratic institutions, and even provoke real-world tensions or violence.
Global threat
During Pakistan’s 2024 general election, narrative attacks and disinformation, including deepfakes and viral social media posts, appeared to influence public perception and voting behaviours, resulting in widespread post-election confusion and ongoing protests.
Accusations of election rigging, internet blackouts, and deepfake videos targeting Imran Khan and his party, PTI, contributed to the tumultuous election period. This election also saw significant use of AI technology, including Khan’s AI-generated victory speech.
Similarly, Indonesia’s election saw the pervasive use of AI tools, amplified by the challenges of combating misinformation and disinformation on social media, where narratives targeting politicians jumped across platforms. AI was used to create positive portrayals, such as Prabowo Subianto’s rebranding with cartoon “softfakes”, as well as negative ones, such as deepfakes designed to sow distrust.
From February to April, pro-Russian social media accounts appeared to target politicians in Chad with campaigns that blended authentic and inauthentic amplification, successfully reaching sympathetic audiences.
The narrative surrounding interim president Mahamat Déby shifted from portraying him as a Western-serving dictator to a populist hero standing up to France, correlating with Chad’s diplomatic realignment towards Russia.
Specific influencers, reportedly linked to Chadian rebel groups, played a crucial role in driving the inauthentic amplification of anti-French government and anti-Déby content, raising the risk of disinformation fuelling real-world tensions or violence around the election.
These pro-Russian narrative tactics demonstrated high agility in evolving portrayals of key politicians to match changing geopolitical winds and on-the-ground developments, revealing these campaigns’ systematic and responsive nature.
In Mexico’s historic election, which elected Claudia Sheinbaum as the country’s first female and first Jewish president, deepfakes, disinformation, and identity politics were rampant for months ahead of the vote. Sheinbaum and others faced numerous misleading attacks, including deepfakes and misinformation questioning her legitimacy. Despite these challenges, voter turnout was high, reflecting public engagement.
Combating disinformation
Experts agree that protecting elections from AI-enabled disinformation requires a multi-faceted approach involving policymakers, election officials, tech platforms, media outlets, and the public.
There are some solutions, said Rice. “Tackling disinformation must be a multifaceted approach. It includes improved public media literacy, a reinvestment in journalism and media, collaboration with tech platforms, new AI tools that spot inauthentic activity, politicians and political parties dedicated to the rule of law, and more.”
To ensure that AI serves democratic values, collaboration among policymakers, ethicists, and civil society groups is essential. Investing in research and developing advanced AI tools to detect deepfakes, synthetic text, and manipulated media is a critical step in maintaining the integrity of information in the digital age.
Governments must also develop and enforce targeted regulations to effectively combat the malicious use of AI in elections. This includes prohibiting deepfakes in political advertising and mandating transparency in the use of AI-generated content.
Hardening voting systems, voter registration databases, and election reporting mechanisms against cyber threats and disinformation is non-negotiable. Robust cybersecurity measures, audits, and contingency plans must be in place to maintain the integrity and credibility of election results.
Collaborative efforts among election authorities, media outlets, and tech platforms are also essential in identifying, debunking, and labelling false information. By sharing data and best practices across organisations, these efforts can be scaled and coordinated more effectively. Technology solutions, such as bespoke language models designed for context-checking, can play a significant role in this process.
Educating the electorate is also important: empowering citizens with the skills to critically evaluate information, check sources, and resist manipulation is crucial in the fight against disinformation. Public awareness campaigns, educational programs, and journalistic initiatives that promote media literacy and inoculate against disinformation techniques are vital investments in the resilience of democratic societies.
If all this seems like a tall order, take heart from Goertzel, who said he is optimistic about the future and about technology’s ability to combat technology.
“Look, yeah so you have these bad guys building systems, but so are we. There are AI tools that can spot manipulated content and check the context of internet content. And at least in the US we have a public that cares about democracy and advanced technology firms that give us an edge.”
Despite the rise of AI-enabled deepfakes and disinformation, democratic institutions are resilient and adaptable. By fostering transparency, regulation, collaboration, and education, we can safeguard the integrity of elections and ensure that technology serves to strengthen rather than undermine democracy.