Future Tense

Did Instagram Really Revoke the Verification of Venezuelan Dictator Nicolás Maduro?

On the left is a photograph of a smiling Juan Guaidó. On the right is a photo of Nicolás Maduro.
Venezuela’s National Assembly head Juan Guaidó and President Nicolás Maduro YURI CORTEZ/Getty Images

As violent protests erupted in Venezuela challenging incumbent President Nicolás Maduro, the United States joined several other countries in recognizing opposition leader Juan Guaidó as the legitimate head of state. But it wasn’t only the traditional superpowers or multinational authorities people looked to for a verdict. The signal also seemed to come from a newer geopolitical player: Instagram.

On Wednesday, just as the United States indicated its support for Guaidó, several observers noted that the Facebook-owned social networking platform appeared to have revoked Maduro’s “verified” status, implicitly rejecting his continued claim to office. Screenshots of Maduro’s noticeably unverified profile were juxtaposed with Guaidó’s, which did sport the coveted blue check mark, implying that Instagram had conferred the elevated status on the newly prominent leader. For instance, take this now-deleted tweet:

A screenshot of a tweet showing Juan Guaidó's verified Instagram account next to Nicolás Maduro's unverified account.

An Instagram spokeswoman later clarified in a statement to Fast Company that “Nicolás Maduro was not verified on Instagram, and we did not remove verification from his account”; Guaidó, meanwhile, had apparently been verified several months earlier. But some had already latched on to the rumor, pointing to the social platform’s apparent move as a notable rebuke of Maduro. Kremlin-backed propaganda network RT used the rumor to further Russia’s position that Maduro is still the rightful leader of Venezuela, while arguing that U.S. (and U.S.-based social networks’ apparent) support of Guaidó is a clear example of Western extraterritorial interference.

This is far from the first time a tech giant has been accused of taking sides in a geopolitical quarrel. In 2010, Google Maps found itself in the middle of a live border dispute between Costa Rica and Nicaragua. In 2011, Facebook effectively acknowledged the sovereignty of Kosovo despite the nation’s lack of official U.N. recognition. In 2013, Google again caused a minor diplomatic row by appearing to recognize the state of Palestine, changing the label on its localized search page from “Google Palestinian Territories” to “Google Palestine.”

This isn’t even the first time social networks’ profile verification features have become deeply politicized. Twitter has learned this lesson many times over: its checkmark verification regime sparked an actual diplomatic outcry from the Ukrainian Embassy in the U.K. when the platform appeared to legitimize Russian control over Crimea by verifying a Russian diplomatic account in the contested territory.

Concerns about verified status extend beyond international diplomacy: Twitter was celebrated by some and criticized by others for choosing ideological sides when it revoked the verified statuses of right-wing trolls and white supremacists. The company admitted that while the tool was intended to be a neutral indicator of authenticity, “verification has long been perceived as an endorsement.” It updated its policy to allow the status to be removed if verified users engage in hate speech and other threatening behaviors that violate the platform’s code of conduct.

Platforms must also sometimes decide whether to consider removing controversial and authoritarian leaders outright. Instagram blocked, unblocked, and then re-blocked Chechen dictator Ramzan Kadyrov after the U.S. Treasury Department sanctioned him under the Magnitsky Act. (As a U.S. company, Facebook explained the move as a “legal obligation.”) Facebook banned a number of Myanmar military officials after a U.N. report recommended the generals’ prosecution for war crimes, though media reports and human rights advocacy also reportedly informed the company’s decision.

But official pronouncements and diplomatic statements are frequently made on social platforms. Deactivating national leaders’ accounts could interfere with diplomacy or domestic matters. Removing verification, meanwhile, could lead to confusion about the authenticity of these statements, making it exceedingly difficult to spot misinformation that would inevitably come from imposter accounts.

In these sorts of gray areas, tech companies need to weigh their own internal policies against relevant laws, public pressure, and immediate circumstances or threats to make rapid and highly public determinations. The outcomes of these decisions can be inconsistent, and their justifications rarely satisfying.

As UCLA law professor Kristen Eichensehr has argued, technology companies prefer to fashion themselves as “digital Switzerlands.” Under customary international law, such a designation might be enough to shield a country from conflict. On the internet, though, the situation is far murkier.

Technology companies cannot be Switzerlands. They are now inherently political actors, and their product and policy decisions increasingly have real-world effects. Companies need to decide, for instance, whether oppressive countries should have access not only to social platforms but to artificial intelligence tools like facial recognition, and they are scrambling to craft principles to guide product and business decisions that have implications for human rights.

While Instagram’s profile verification policy appears to have been a red herring in the ongoing story of Venezuela’s upheaval, the thousands of little blue check marks scattered around the internet should serve as a reminder of the widespread influence technology companies have in shaping public perceptions, and of the potentially serious consequences that a few pixels can have on matters of enormous social significance.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.