Since the weeks leading up to Russia’s invasion of Ukraine, warnings have circulated that Russia might deploy deepfake videos—convincing fake videos created with artificial intelligence—in the surrounding information war. Perhaps it would use a deepfake to fabricate a pretext for the invasion, or to have Ukrainian President Volodymyr Zelensky issue an order to surrender. With bated breath we waited, but no deepfakery materialized. Then finally, on March 16—20 days into the invasion and 13 days after Ukraine warned that this exact scenario might unfold—a deepfake of Zelensky surrendering did indeed appear, and it was … unconvincing and obviously fake.
The video editing was low-quality, and the voice was noticeably off; few people seem to have been fooled by it. Taking no chances, Zelensky himself quickly addressed the deepfake, leaving no doubt about its falsity.
One is tempted to conclude from this episode that deepfake algorithms are just not powerful enough yet, and that the dreaded moment when deepfakes wreak havoc in political arenas is still off in the distant future, if it ever arrives. However, a closer inspection of the past few years reveals a different story: The destabilizing effects of deepfakes in politics have already arrived—not in a single widely publicized scandal or a flood of incidents, but in an ominous trickle of subtle yet consequential episodes that largely slipped under the radar. The Zelensky deepfake was quickly put to rest, but other videos from recent years remain of uncertain authenticity; even today we don’t know whether they are deepfakes or real. The difficulty of debunking deepfakes isn’t just a question of technological sophistication.
Three years ago, Mother Jones reported on a strange saga surrounding Ali Bongo, the president of Gabon. Bongo was hospitalized for an undisclosed illness in October 2018. Two months later, the vice president announced that Bongo had suffered a stroke but was recovering and doing well. However, aside from a few photos and a silent video released by the government, there was no sign of Bongo during this time. Speculation proliferated that officials were lying, that Bongo was in far worse condition than they admitted—possibly even dead. To help allay these concerns, on Jan. 1, 2019, the government posted to social media a video of Bongo giving the customary New Year’s address.
But something didn’t seem right.
Bruno Ben Moubamba, a prominent Gabonese politician who ran against Bongo in the previous two elections, claimed the video was a deepfake. He argued that Bongo’s face seemed strangely immobile and that his eye movements did not appear synchronized with his jaw movements. Julie Owono, the international technology lawyer who brought the Bongo saga to Mother Jones’ attention, noted that Bongo blinked only 13 times during the two-minute video, which she said was less than half the typical rate. The deepfake theory rapidly gained a sizable following. Activists argued that Gabon’s ruling party had used deepfake technology to hide Bongo’s dire state of health and thereby avoid the special election that the law mandates when the president is unfit to lead.
One week after the release of the enigmatic New Year’s video, Gabon’s military attempted a coup, explicitly citing the oddness of the video as evidence that the president was absent and that the government was lying about it. The coup failed, the government retained control, and in August 2019 Bongo finally made his first public appearance since the stroke.
We still don’t know whether Bongo’s party used a deepfake video to deceive the public long enough for him to recover—and in doing so illegally avoided a special election that might have led to his ouster. One of the complications is that all the oddities in the New Year’s video that suggest it’s a deepfake could also be the result of Bongo’s stroke.
This event in Gabon foreshadowed one that took place in the United States nearly two years later.
On Oct. 2, 2020, just weeks before one of the most important elections in American history, news broke that President Donald Trump had tested positive for COVID-19. Questions mounted throughout the day over the severity of his illness. At 6:31 p.m., he tweeted an 18-second video in which he said he was heading to Walter Reed, reassuring viewers that he believed he was doing very well.
Just as with Bongo’s New Year’s address, this video looked very strange. Trump had an uncharacteristically flat affect, a motionless manner, and a vacant look in his eyes. Immediately, talk spread on social media that the video was a deepfake meant to hide the dire state of the president’s health.
This time, suspicion quickly faded as more footage of Trump appeared. The theory was fully dispelled when he gave a live address upon departing Walter Reed a few days later. But for a brief moment, it really was hard to know what was going on and what to believe. The existence of deepfake technology meant that Trump’s 18-second video raised more questions than it answered. In hindsight, the things that made the video appear to be a possible deepfake were likely just a result of the president being ill and possibly medicated.
Days later, Trump’s team posted a video of him speaking from the White House lawn, and skepticism arose once again. Some claimed the background looked “glitchy,” as though there were a green screen behind him. But Slate quickly debunked those claims.
A few months later, another instance of deepfake dystopia played out, this time in Myanmar. On Feb. 1, 2021, a military coup officially began when State Counsellor Aung San Suu Kyi—the country’s de facto civilian leader, who in the 2010s played a key role in transitioning Myanmar from military rule to partial democracy—was arrested and deposed, along with other members of her ruling party. The president of the U.N. General Assembly called for her immediate release; the U.N. secretary-general said the coup was a “serious blow to democratic reforms in Myanmar.” Suu Kyi is a Nobel Peace Prize laureate, and while she has drawn international criticism for her country’s role in the genocide of the Rohingya people and her denial of the atrocities committed, she is a legitimately elected democratic leader; the military needed to justify the brazen steps it took.
A military-run TV station broadcast a video of a detained former regional chief minister delivering a public confession in which he said he had bribed Suu Kyi. This seemed to support the military’s claims that Suu Kyi was corrupt and had violated ethics laws. Some claimed the minister’s voice in the video didn’t sound like his usual one, and noted that the visuals looked strange: His facial movements followed a repetitive pattern, and his expression looked oddly emotionless. To this day, some people suspect the video is a deepfake, while others believe its unnaturalness is the result of a forced confession and poor teleprompting. Once again, we don’t know for sure. The video is low-resolution and grainy, which makes distinguishing a deepfake from an authentic video particularly challenging. Suu Kyi is currently under military arrest and facing an array of charges, including corruption—which carries a maximum penalty of 15 years in prison.
What these examples show is that the context of a video plays an enormous role in how difficult it is to determine its authenticity. People speak in a somewhat unusual manner in deepfake videos—but, as both the Trump and Bongo incidents made apparent, people also do so when they are infirm. Duress, such as the Myanmar official was under, can likewise alter a person’s speech patterns. The resolution of a video matters, too: The flaws in deepfakes are more apparent in high-resolution footage. Finally, the political situation matters. The Zelensky deepfake was a foolish endeavor not just because it was too amateurish to be convincing—it was also easy to debunk because Zelensky himself could respond to it.
Don’t be lulled into a false sense of security by this ill-conceived Zelensky deepfake. More difficult deepfake situations have already occurred, and more will surely arise.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.