Future Tense

Faux News Articles and Social Media Posts Will Haunt This Election

Facebook and Twitter need to do more to prevent that.

On Feb. 24, the Free Speech Project will host an event in Washington, D.C., called “Redefining Free Speech for the Digital Age.” For more information and to RSVP, visit the New America website.

Last September, an image of a New York Times headline began circulating online, claiming that Abdullah Abdullah, a candidate for the Afghan presidency, had taken millions of dollars from Pakistan. Though the Times never published such a story, the convincing fake image—complete with the paper’s font and website design—exploited longstanding divisions in Afghan politics during a closely contested presidential election. For its creator, the fabricated headline became a useful tool for undermining Afghanistan’s already-fragile politics.

On Feb. 4, amid growing concern about deepfakes—ultra-convincing fake images and videos created using AI—Twitter announced a new set of policies to address synthetic and manipulated media on the platform. Under its new policies, Twitter will examine whether a piece of media content has been altered or fabricated, if it has been shared in a “deceptive manner,” and if it is likely to “impact public safety or cause serious harm.” The policies also attempt to establish guidelines to gauge a user’s intent to deceive.

Tweets that violate these guidelines may be labeled, and users may be warned before they interact with manipulated content. But according to Axios’ Kyle Daly, Twitter’s policies set a “high bar” for removing manipulated content from the platform. Even fabricated content may remain on the platform if it doesn’t check all of Twitter’s boxes. The Verge’s Adi Robertson wrote that “this framework seems aimed at addressing specific high-profile problems on the platform, not running a scorched-Earth campaign against fake photos and video.”

In particular, Twitter’s new rules do not adequately address an especially infectious form of manipulated content: fabricated images of web pages. Popular browser features like Google Chrome’s DevTools and Safari’s Web Inspector let even inexperienced users create highly convincing fake images of web pages. Take another example from 2018, when Twitter user Shaun Usher posted what appeared to be a screenshot of a 2015 tweet posted by Donald Trump, in which the future president purportedly declared, “If the Dow Joans ever falls more than 1000 “points” in a Single Day the sitting president should be “loaded” into a very big cannon and Shot into the sun at TREMENDOUS SPEED! No excuses!” More than 4,000 Twitter users shared Usher’s image falsely depicting a tweet the president never actually posted.
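
To illustrate how low the technical bar is, here is a minimal sketch of the kind of one-line edit a browser’s developer console permits. The selector and replacement text are hypothetical, and the change exists only in the manipulator’s own browser tab before it is screenshotted.

```typescript
// Minimal sketch: rewriting a rendered headline from the browser console,
// which is essentially what DevTools' and Web Inspector's element editing do.
// The selector 'h1.headline' is hypothetical; real sites use their own markup.
const headline = document.querySelector('h1.headline');
if (headline) {
  // The edit is purely local to this tab; the publisher's servers are untouched.
  headline.textContent = 'Any fabricated claim the manipulator wants readers to believe';
  // A screenshot of the page now shows the fake text under the real site's design.
}
```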

That tweet and the manipulated New York Times headline are alarming, but it’s hard to know how Twitter would handle those cases under its new policies. While Twitter would likely conclude that the content was “shared in a deceptive manner,” it is unclear whether it would consider the images to be “significantly and deceptively altered or fabricated.” Maybe the images would stay up on the platform, maybe they wouldn’t. This ambiguity highlights a key weakness in Twitter’s otherwise encouraging changes. The platform should build on its new policies not only by identifying which specific forms of manipulated media are subject to its guidelines, but also by explicitly listing fake content produced using web developer tools.

Armed with an accessible set of instructions, a relatively skilled internet user can create detailed imitations of web pages and distribute them across online platforms. These fabricated web pages could exploit social media audiences’ innate psychological vulnerabilities. As the U.S. election season progresses, the twisted incentives to produce manipulated content will likely grow. The urgency and complexity of this problem require considering a range of approaches to combating this highly infectious strain of disinformation. Twitter’s new approach to policing manipulated media represents a partial step forward—but the company must continue adjusting these policies as new forms of manipulated media emerge.

There are no quick solutions here. But there are viable ways to combat this form of manipulated content. In the immediate term, members of the press, especially media outlets that emphasize technology coverage, should pay close attention to cases in which fake web pages emerge and travel widely across the internet. Studying these early incidents—and even debunking manipulated images that target media organizations and public figures—will help policymakers, tech companies, and media outlets better understand which malign actors have realized the potential of features like DevTools and Web Inspector to spread disinformation.

Facebook recently announced its own policy on manipulated media, pledging to remove content that is “edited or synthesized” with artificial intelligence or machine learning, while excluding content made for parody or satire. Twitter’s new manipulated media policy offers clearer standards than those set by Facebook, but neither company should tackle this problem alone. In a recent report, our colleague Kara Frederick wrote that tech companies should build an “enduring disinformation-related consortium,” with the Global Internet Forum to Counter Terrorism as the template. A consortium like this, Frederick wrote, would allow social media companies to maintain a shared database of “hashes,” or digital fingerprints of known fake images and videos, in order to prevent disinformation outbreaks on one platform from traveling to new hosts. In the medium term, a counter-disinformation consortium would help social media companies fight this unique form of manipulated media together and would encourage them to work toward a common set of content standards. Companies like Google and Apple should also consider whether the risks of making DevTools and Web Inspector so easily accessible to everyday users outweigh the features’ advantages.
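
For readers unfamiliar with hash sharing, the sketch below shows the basic mechanics under stated assumptions: it uses a cryptographic SHA-256 digest as a stand-in for the perceptual hashes such consortiums typically rely on, and all file paths and function names are hypothetical.

```typescript
// Minimal sketch of the shared-hash idea: each platform fingerprints known
// fake media and checks new uploads against a pooled set of fingerprints.
// SHA-256 is a simple stand-in here; real systems typically use perceptual
// hashes that survive re-encoding and cropping.
import { createHash } from 'crypto';
import { readFileSync } from 'fs';

// Fingerprints contributed by all participating platforms (hypothetical data).
const sharedHashDatabase = new Set<string>();

function fingerprint(filePath: string): string {
  return createHash('sha256').update(readFileSync(filePath)).digest('hex');
}

function reportFake(filePath: string): void {
  sharedHashDatabase.add(fingerprint(filePath));
}

function isKnownFake(filePath: string): boolean {
  return sharedHashDatabase.has(fingerprint(filePath));
}

// Usage: one platform reports a fabricated screenshot; another can flag
// re-uploads of the same file before it spreads further.
reportFake('./fake-nyt-headline.png');    // hypothetical path
console.log(isKnownFake('./upload.png')); // true only if the bytes match exactly
```

An exact-match digest like this only catches byte-identical re-uploads, which is why production systems favor perceptual hashing that tolerates compression and cropping.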

Finally, social media companies should consider addressing one of the most crucial force multipliers for online disinformation: speed. In November 2018, Justin Kosslyn, then at Jigsaw, wrote that “it is time to abandon our groupthink bias against friction as a design principle” when it comes to the internet. Social media platforms could adopt some form of the “pause button” advocated by Jonathan Rauch, as well as other features that could slow the spread of a disinformation outbreak. Twitter’s new policy includes the possibility of warning users before they share or engage with content labeled as “altered or fabricated.” Whether this measure helps counter fabricated content created with tools like DevTools will depend on how Twitter ultimately interprets its definition of manipulated media. Left unchecked, the sheer speed of these platforms lets users, with all their biases and vulnerabilities, share disinformation at a harmful pace. Social media companies must weigh the advantages of that speed against the costs of rapidly spreading disinformation.
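
As a rough illustration of what such friction might look like inside a share flow, here is a minimal sketch; the types, field names, and warning mechanism are all hypothetical rather than any platform’s actual API.

```typescript
// Minimal sketch of a friction step before resharing labeled content.
// All names here are hypothetical, not any platform's actual API.
interface Post {
  id: string;
  labeledAsManipulated: boolean;
}

// warnUser shows the platform's warning dialog and resolves to the user's choice.
async function confirmShare(post: Post, warnUser: () => Promise<boolean>): Promise<boolean> {
  if (!post.labeledAsManipulated) {
    return true; // unlabeled content shares without any added friction
  }
  // Labeled content forces a deliberate pause before the share goes through.
  return warnUser();
}
```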

While Twitter and Facebook’s new policies on manipulated media represent an imperfect step forward, the platforms’ efforts to combat fake content must not stop there. Opportunities for malign actors to exploit users’ vulnerabilities are growing as quickly as the tools to create manipulated content are advancing and adapting. It is unlikely that the fabricated New York Times headline decisively impacted the outcome of the Afghan election. In the future, other fragile political environments may not be so fortunate.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
