Russia’s interference in the 2016 election cycle was a wake-up call for many. But despite the uproar it has caused in U.S. politics, the reality is that Russia’s tactics were surprisingly unsophisticated and flawed, yet they still had a huge impact in many respects.
What every campaign from this point forward should be worrying about is how digital interference efforts will evolve over the next few years to become much more targeted, disruptive and harder to prevent.
More sophisticated methods and technologies could be leveraged to do far greater damage in future campaigns. One of the more disturbing threats is the potential use of “deepfakes” — highly convincing, but ultimately fake videos created by deep learning technology. Here’s what you need to understand about this technology.
Where the name comes from
The term deepfakes comes from the subreddit where this capability first gained widespread attention. Until now, the creators of deepfakes have mostly limited themselves to celebrity spoofs and fake pornography. But the potential for deepfakes clearly goes far beyond these uses.
Researchers at the University of Washington used the technique to create a doctored video of former President Barack Obama. In May, a Belgian political party posted a fake video of President Donald Trump urging Belgium to withdraw from the Paris climate agreement. One notable point about the Belgian video is that, unsophisticated as it was, it still fooled many people, to the point where the video’s creators had to turn to social media to do damage control and explain that it was intended as a practical joke.
The U.S. government isn’t laughing. Lawmakers sent a letter to the Director of National Intelligence last fall, urging him to assess the threat this new technology poses to national security. The danger posed by deepfakes was later included in the DNI’s 2019 Worldwide Threat Assessment report as a top threat to the 2020 election cycle: “Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.”
Risks for candidates, parties and groups
Deepfakes pose a number of significant risks for political campaigns. They could be used to create fake controversies by literally putting words in a candidate’s mouth, or by editing the footage of their rallies and fundraisers to portray supporters in a negative light. Candidates could themselves be tricked by deepfakes, and accidentally retweet or otherwise endorse a fake video created against an opponent or about an issue they care about, thereby damaging their public image and credibility.
Deepfakes could trickle out in social media channels over a period of weeks or months to distract a campaign and draw away resources, or they could be used in a “swarm effect” by unloading multiple forgeries all at once in the final days of a campaign.
The scope of the problem
Combating deepfakes is tricky because even when a video is exposed as a forgery, in many cases the damage has already been done. Changing the perception of people who have seen and heard something with their own eyes and ears is even more challenging.
It’s no secret that in politics, as in other areas of life, people often hold strong beliefs and preconceived notions, and those beliefs don’t necessarily change just because the facts say otherwise (a tendency psychologists link to “cognitive dissonance”). People may refuse to believe a fake video really is fake. They may dismiss any effort to discredit it as a conspiracy, or they may accept that the video is a forgery but still be psychologically influenced by the negative characterization it presented of the candidate.
Not to mention that getting a video debunked as a deepfake, and removed from a social media platform, is not a given.
Candidates and their supporters will have to appeal to the major social media companies each time a fake video emerges in order to have it removed. Then they’ll have to counter the video’s claims by going public against it — either through direct communications (social media, press releases, blog posts and so on) or the mainstream media.
But how many potential voters who saw the fake video will also see the retraction? And how often will the media choose to cover these stories, particularly in local races? If deepfakes become widespread, they will also lose their novelty effect for the press. What then?
The issue is far more complicated than it may seem. Even with the new artificial intelligence-based tools being developed to detect deepfakes, the resolution for candidates may be slow and unsatisfying, to say the least.
How to identify a deepfake
The quality of a deepfake will vary with the creator’s skill and experience and the tools they use. Some deepfakes may be extremely realistic and hard to detect, but in many cases it is possible to find defects in the video that expose it as a forgery.
Here are a few things to look for (a simple automated check is sketched after the list):
- Cropped effect in the neck, eyes or mouth
- "Glimmering and fuzziness" in the mouth or face
- Box-like shapes in the mouth, teeth or chin
- Differences in skin tone
- Inconsistencies in the background or lighting
- Unnatural movements
- Irregular blinking
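Some of these artifacts can be screened for automatically. As a minimal illustration, the sketch below flags the last item, irregular blinking: early face-swap models were trained largely on photos of open-eyed faces, so generated faces often blink far less than real ones. This is a rough heuristic, not a forensic tool. It assumes OpenCV (`pip install opencv-python`) and its bundled Haar cascades, and the file name `suspect_clip.mp4` is hypothetical.

```python
import cv2

# Haar cascades that ship with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_stats(path):
    """Return (ratio of face frames with no open eye detected, seconds of face footage)."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    frames_with_face, frames_eyes_closed = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        frames_with_face += 1
        # Look for open eyes only in the upper half of the face box.
        roi = gray[y:y + h // 2, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 8)
        if len(eyes) == 0:  # no open eye found: possibly a blink frame
            frames_eyes_closed += 1
    cap.release()
    if frames_with_face == 0:
        return None
    return frames_eyes_closed / frames_with_face, frames_with_face / fps

# Humans blink roughly 15-20 times a minute, each blink lasting a few frames,
# so real footage should show closed eyes in a small but nonzero share of frames.
result = blink_stats("suspect_clip.mp4")  # hypothetical file name
if result:
    closed_ratio, seconds = result
    print(f"{seconds:.1f}s of face footage, eyes closed in {closed_ratio:.1%} of frames")
    if closed_ratio < 0.01:
        print("Almost no blinking detected; worth a closer manual look.")
```

A near-zero blink ratio should be treated only as a cue for closer manual review: Haar cascades are crude, and detection failures also register as “closed eyes.” Production-grade detectors rely on trained neural networks and examine many of the artifacts above at once.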
Advice for countering deepfakes
Individual campaigns will find it difficult to address this threat on their own. It’s a problem that requires a multilevel response: strong coordination between the county, state and national parties, and between candidates, their party leadership, law enforcement, and news media.
There’s no doubt that parties need to lead on this issue. A good model to follow is what some news organizations, like the Wall Street Journal, are now doing. That model starts with establishing a central task force to research the threat and develop better counter-strategies; coordinating with outside groups (government, universities, private companies) that have expertise in this field to better understand the technology; and conducting regular training for staff.
Top-level campaigns also need to establish social media teams capable of 24-hour monitoring, particularly as a race draws closer to the finish line. These teams should be able to spot deepfakes early and escalate the response effort.
Through better coordination, training and vigilant social media monitoring, campaigns will be able to mitigate the potential damage from a deepfake attack.
David Kennedy is a former hacker for the NSA and Marine Corps and the founder/CEO of TrustedSec, a white-hat hacking and cybersecurity advisory firm. He is also the former chief security officer of Diebold, which used to manufacture voting machine equipment, and has testified twice before Congress about cyber threats to the U.S. government.