Year of elections: the deepfake threat to politics and business
Over two billion voters will be heading to the polls in over 50 countries this year, with candidates taking to the stages across the globe, releasing updated manifestos, and encouraging the public to tick their box.
And how these pledges are communicated across the globe is changing, with social media now playing a dominant role.
According to a study undertaken by Reuters and the University of Oxford, in major territories such as Japan, Australia, Brazil, and a number across Europe, nearly a third of people used Facebook for their news last year.
Approximately 20% used YouTube, 16% used WhatsApp, and 6% used TikTok. While numbers for the latter seem low, TikTok news consumption is increasing, and it is much higher among younger groups in some Asia-Pacific, Latin American, and African countries, where it ranges from 20% to 30%.
Why does this matter? Because, as we’ve learned over the last decade, not all information online is true, and deepfake content has already been impacting elections.
Recent elections in countries including Kenya, Brazil, and Turkey have already fallen victim to misinformation, including deepfakes.
The World Economic Forum identified misinformation and disinformation as the most severe short-term threat for this year, ranking it above interstate armed conflict at number six, and some say it may itself become a cause of warfare.
Last year, a human rights activist and great-grandson of former South African president Nelson Mandela warned that deepfakes could spark a civil war or genocide in areas of Africa rife with tension.
“The dangerous thing about the spread of deepfakes is that they are not easy to track because once it hits WhatsApp it can be forwarded to as many people as it possibly can,” he said.
“Think of the impact and the damage it could cause. You could rally a particular ethnic group to attack another, in the split of a second.”
Meanwhile in the US, just last weekend some voters in the state of New Hampshire received a call featuring deepfake audio of Joe Biden advising them not to vote in this week’s presidential primary elections.
The phone message, purporting to be from Biden, said: “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”
New Hampshire’s attorney general office was forced to release a statement debunking the call: “Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated based on initial indications.”
In Europe, the UK electoral watchdog warned early last year that the nation’s elections could be affected by deepfakes, with laws “very old and really [in] need to be updated.”
“There is legitimate cause for concern regarding the potential impact of deepfake technology on elections this year,” says Tim Callan, chief experience officer at New Jersey-based cyber security firm Sectigo. “The rapid advancements in AI mean more individuals can now create convincing deepfake scams, with little time or money needed.”
Cyber security experts on deepfakes in elections
Callan believes that the upcoming elections mean there is a heightened risk that politicians may become targets.
“The sophistication of AI allows for the replication of every aspect of an individual’s appearance, encompassing their eyes, face, and voice.”
“A notable example is the deepfake audio featuring Keir Starmer (the UK’s opposition leader) towards the end of 2023. This underlines the need for vigilance and proactive measures to safeguard the integrity of the electoral process in the face of evolving technological threats.”
The audio, which purported to capture Keir Starmer swearing at and abusing party staffers, gathered 1.5 million views on social media platform X, and the politician himself was forced to debunk it.
“Deepfake videos could be engineered to fabricate campaign events, speeches, or debates featuring presidential candidates, with the aim of deceiving the public into endorsing manipulated views, opinions, policies, or worse,” says David Emm, principal security researcher at cyber security firm Kaspersky.
“To accomplish this, scammers employ technology to digitally analyse distinctive features of an individual, including their face, hair, body, hand gestures, vocal cues, and voice to build fake characters,” Emm explains.
“As the election campaign trail gains momentum, and more content is broadcasted with messages released, this is when fraudsters are likely to strike.”
Recently, Slovakia’s election fell victim to deepfake content targeting its candidates, which spread across Facebook and via chain emails.
The fake audio supposedly featured candidates discussing strategies on how to rig the elections, just two days before the vote.
To help tackle this, Meta recently announced new rules requiring advertisers to disclose commercial AI alteration. The big tech firm, which owns Instagram, Facebook, and WhatsApp, expects advertisers to declare AI alterations during the submission process if an ad “contains a photorealistic image or video, or realistic sounding audio.”
However, the firm’s president of global affairs and former UK deputy prime minister Nick Clegg added that advertisers aren’t required to disclose fake images that are “inconsequential or immaterial” to the claims made in the ad.
Also, while some have highlighted this as a step forward to a more trustworthy social media, the rules do not appear to cover posts made by individual users.
“The public must understand the importance of verifying everything they see online,” stresses Callan. “It may seem unnecessary, but by developing a healthy scepticism towards online content and cross-checking information from multiple reliable sources, the public should be better able to decipher real from fake.”
Last year, for instance, a deepfake of Turkey’s opposition leader spread online in the run-up to its election. However, viewers quickly debunked it because the candidate, unusually, spoke fluent English in the video.
Another solution, Callan suggests, would be to start integrating built-in encrypted timestamps on all recording devices to serve as a watermark at the moment of capture.
“These encrypted watermarks should be based on the highly secure Public Key Infrastructure (PKI), providing flawless means to distinguish authentic content from deepfakes,” he said. “This approach aims to restore digital trust by implementing a reliable verification mechanism.”
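In spirit, the verification mechanism Callan describes could work as follows. This is a hypothetical sketch, not Sectigo's design: a capture device signs the frame data together with a timestamp at the moment of recording, and any later edit breaks the signature. A real PKI deployment would use an asymmetric key pair (the device signs with an embedded private key; anyone verifies with its public certificate); an HMAC with a placeholder device key stands in here so the sketch runs on the Python standard library alone.

```python
import hashlib
import hmac

# Placeholder for a key embedded in the capture device (in real PKI,
# this would be a private key paired with a public certificate).
DEVICE_KEY = b"device-embedded-secret"

def watermark(frame: bytes, timestamp: str) -> str:
    """Sign the frame's hash plus its capture time at the moment of recording."""
    payload = hashlib.sha256(frame).hexdigest().encode() + timestamp.encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify(frame: bytes, timestamp: str, tag: str) -> bool:
    """Recompute the signature; any later edit to the frame invalidates it."""
    return hmac.compare_digest(watermark(frame, timestamp), tag)

original = b"\x00\x01raw-frame-bytes"
tag = watermark(original, "2024-01-23T12:00:00Z")

assert verify(original, "2024-01-23T12:00:00Z", tag)                # authentic
assert not verify(original + b"edit", "2024-01-23T12:00:00Z", tag)  # tampered
```

The key property is that the signature binds content to capture time: a deepfake assembled after the fact cannot produce a valid tag without access to the device's signing key.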
Should your business prepare for a rise in deepfake scams?
According to identity verification service Sumsub, the number of deepfakes detected across all industries has increased tenfold over the last year.
Its head of AI and ML, Pavel Goldman, says that “deepfakes pave the way for identity theft, scams, and misinformation on an unprecedented scale.”
The best way to prevent deepfake fraud, Callan says, is through education: “By conducting regular training sessions which educate employees about the existence and risks of deepfake technology, they will be one step ahead when faced with a deepfake threat.”
“Businesses need to ensure that employees, as well as their systems, can verify the identity of the person they are communicating with.”
Callan also advises businesses to adopt stricter authentication measures, regularly update security policies, and work with experts to stay informed on best practices.
Still, David Emm of Kaspersky notes that the impact on daily life is, for now, likely limited: “In contrast to more commonplace scams like phishing attacks, the creation of convincing deepfake videos is a resource-intensive process, both in terms of cost and time.”
Another concern is the use of deepfakes to access business and bank accounts through biometric verification. Kaarel Kotkas, CEO of biometric verification platform Veriff, maintains that with the right technology this risk is marginal.
“You can’t rely on the information you see,” says Kotkas. “More than 70% of fraud found is actually from the data points that are invisible to human eyes.”
For example, if someone was to try and hack into a computer using a deepfake: “Visually, it might be okay for the human eyes, but essentially it could be that the frame rate doesn’t align with the camera device connected to the computer.”
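One of the "invisible" signals Kotkas describes can be sketched in a few lines. This is an illustrative example, not Veriff's actual detection logic: an injected deepfake stream often arrives at a frame rate the physical camera connected to the computer cannot produce, so comparing observed frame intervals against the device's reported rate flags the mismatch. The timestamps, frame rates, and tolerance below are all assumed for the sketch.

```python
def frame_rate_consistent(frame_times_s, reported_fps, tolerance=0.1):
    """Return True if observed frame intervals match the camera's reported rate."""
    if len(frame_times_s) < 2:
        return True  # not enough frames to judge
    intervals = [b - a for a, b in zip(frame_times_s, frame_times_s[1:])]
    observed_fps = 1.0 / (sum(intervals) / len(intervals))
    # Flag streams whose timing deviates from the device's capability.
    return abs(observed_fps - reported_fps) / reported_fps <= tolerance

# Frames timed like a genuine 30 fps webcam pass the check...
genuine = [i / 30.0 for i in range(10)]
assert frame_rate_consistent(genuine, reported_fps=30)

# ...while a stream injected at 24 fps is flagged as inconsistent.
injected = [i / 24.0 for i in range(10)]
assert not frame_rate_consistent(injected, reported_fps=30)
```

A production system would combine many such device-level signals, but the principle is the same: the fake can fool the eye while still contradicting the hardware's metadata.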
For a firm that wants hyper-secure biometrics, a “password” could be entered through a series of eye movements: “That is very hard for a synthetic content to replicate,” Kotkas suggests.
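The eye-movement "password" Kotkas describes is essentially a challenge-response liveness check, which can be sketched as follows. This is a hypothetical illustration, not Veriff's implementation: the system issues a random gaze challenge at login, and a pre-rendered deepfake cannot match a sequence it could not know in advance. The direction names and challenge length are assumptions for the sketch.

```python
import random

DIRECTIONS = ["left", "right", "up", "down"]

def issue_challenge(length=4, rng=random):
    """Generate a random gaze sequence the user must perform live."""
    return [rng.choice(DIRECTIONS) for _ in range(length)]

def gaze_matches(challenge, tracked):
    """Accept the session only if the tracked gaze reproduces the challenge."""
    return len(tracked) == len(challenge) and all(
        c == t for c, t in zip(challenge, tracked)
    )

challenge = ["left", "up", "right", "down"]
assert gaze_matches(challenge, ["left", "up", "right", "down"])      # live user
assert not gaze_matches(challenge, ["left", "up", "left", "down"])   # replayed fake
```

Because the challenge is generated fresh per session, even a perfect visual replica of the user fails unless it can track and respond in real time.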