Why Wikipedia’s “Nuclear Option” Is the Right Call

A bureaucratic change is coming, in part to keep Siri and Alexa from giving people bad information.

A mushroom cloud envelops the Wikipedia logo. Photo illustration by Slate. Photo by RomoloTavani/iStock/Getty Images Plus.

At this year’s WikiConference North America, Kevin Li, a 17-year-old freshman at Stanford, told me that Wikipedia administrators like him have three special powers. They can 1) block and unblock users, 2) protect articles from persistent vandalism, and 3) hide changes from the project’s editorial history. Administrators receive this bundle of high-level technical abilities after they pass a thorough community review process.

But recently, that first superpower was dialed back a bit: An administrator is no longer able to self-unblock. That means that if Administrator A is blocked by Administrator B, A can no longer lift the block on their own. It may seem like an arcane bureaucratic change, but some in the Wikipedia volunteer community are calling it the “nuclear option” (not to be confused with the U.S. Senate procedure of the same name).

Understanding the online encyclopedia’s motivation to go nuclear requires a brief history lesson. Wikipedia’s open platform has been plagued by vandalism since its founding in 2001. The bad edits range from the humorous (for example, suggesting actor Jeremy Renner is a velociraptor) to the more serious (like the Seigenthaler biography incident). Over the past decade and a half, the length of time that vandalism remains on the site has been greatly reduced, and articles on certain high-profile subjects have been protected to varying degrees from open editing. For example, Renner’s page is now semi-protected, meaning that editors must have a registered user account and cannot edit anonymously from an IP address. President Donald Trump’s page has extended confirmed protection, also known as 30/500 protection, meaning that only editors who have been registered on Wikipedia for more than 30 days and have made more than 500 edits may modify the article. In rare circumstances where legitimate editors cannot get along because of content disputes, an article may be fully protected so that only administrators can make changes. The entry on the Vasco da Gama Bridge in Portugal has been fully protected for a cooling-off period until Dec. 24 after an edit war erupted over whether it is the longest bridge in Europe.
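For readers who think in code, the tiered protection rules above can be sketched as a simple permission check. This is my own illustrative model, not MediaWiki’s actual implementation; only the 30-day/500-edit thresholds come from the article, and the function and field names are invented.

```python
from datetime import date, timedelta

def can_edit(protection, user):
    """Return True if a user may edit a page at the given protection level.

    `user` is None for an anonymous (IP) editor, otherwise a dict with
    `registered_on` (date), `edit_count` (int), and `is_admin` (bool).
    """
    if protection == "none":
        return True
    if user is None:  # anonymous IP editors fail every protected tier
        return False
    if protection == "semi":
        return True  # any registered account suffices
    if protection == "extended-confirmed":  # a.k.a. 30/500 protection
        account_age = date.today() - user["registered_on"]
        return account_age > timedelta(days=30) and user["edit_count"] > 500
    if protection == "full":
        return user["is_admin"]  # only administrators may edit
    raise ValueError(f"unknown protection level: {protection}")

# A brand-new account can edit a semi-protected page (like Renner's)
# but not a 30/500-protected one (like Trump's).
newbie = {"registered_on": date.today(), "edit_count": 3, "is_admin": False}
assert can_edit("semi", newbie)
assert not can_edit("extended-confirmed", newbie)
assert not can_edit("semi", None)
```

Each tier strictly widens the set of editors it excludes, which is why full protection works as a cooling-off device: it shrinks the editing pool to administrators alone.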

Automated bots also help police the pages—for instance, ClueBot NG quickly reverts probable vandalism based on its machine-learning algorithm and a database of common indicators like expletives and other bad words, or unencyclopedic punctuation like “!!!11.” There also remains a contingent of dedicated human volunteers who monitor the site, such as the 259 people who have Renner’s article on their watchlist. If somebody tried to make Renner into a dinosaur again, the change would be speedily reverted by A.I. or human page-patrollers.
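To give a flavor of the “common indicators” approach, here is a toy scorer in that spirit. To be clear, ClueBot NG actually uses a trained neural network; this keyword-and-punctuation check is a deliberately crude stand-in, and the word list is a placeholder, not the bot’s real data.

```python
import re

# Placeholder indicator data, loosely modeled on the categories the article
# mentions: bad words, plus unencyclopedic punctuation like "!!!11".
EXPLETIVES = {"stupid", "dumb"}  # stand-in words, not ClueBot NG's actual list
SHOUTY = re.compile(r"[!?]{2,}\d*|1{2,}!*")  # matches runs like "!!!" or "!!!11"

def looks_like_vandalism(added_text):
    """Flag an edit if it contains a listed bad word or shouty punctuation."""
    words = set(re.findall(r"[a-z']+", added_text.lower()))
    return bool(words & EXPLETIVES) or bool(SHOUTY.search(added_text))

assert looks_like_vandalism("jeremy renner is a dumb velociraptor!!!11")
assert not looks_like_vandalism("Renner starred in The Hurt Locker (2008).")
```

A real classifier scores many weighted signals at once; the point here is only that vandalism often carries cheap, machine-detectable fingerprints.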

But those guardrails mostly protect against entry-level vandalism. It’s much harder when the misbehavior is more advanced—like when the vandal uses the login credentials of an administrator. Those breaches have historically been harder to remediate because the vandal takes on the full bundle of special admin powers. Li told me that admin breach incidents typically followed this pattern:

1) The bad actor hacked into an administrator account, perhaps using publicly available password lists from websites that have been breached, including Myspace, Adobe, and Experian.

2) The bad actor wreaked havoc, using the powers of an admin account to block users and make changes to protected pages.

3) When the hacker was blocked, they used their capability to self-unblock (something they could do before last month’s nuclear option went into effect). The hacker continued to cause harm until one of Wikipedia’s 34 global stewards picked up the problem and shut down the compromised account.

We’ve seen that cycle play out a few times recently. For instance, a hacker used a compromised admin account, self-unblocked, and blanked the main page of English Wikipedia altogether. On Thanksgiving, Siri briefly displayed a lewd image when asked about Trump because of recent Wikipedia vandalism; the persistent vandalism by several users continued throughout the weekend and involved a compromised admin account that had self-unblocked. Even when a bad actor has accessed an administrator’s account, their changes are usually reversed quickly, often within a few minutes. But like other large tech companies, Apple is increasingly dependent on the encyclopedia to provide information for its question-answering features. Even if the Wikipedia community quickly undoes a bad edit, these third-party tech companies can retrieve content during the short window of vandalism, and the bad information can linger on Siri, Google, or Alexa far longer.

Enter the nuclear option. Supporters pointed out that removing a hacked administrator account’s ability to self-unblock would have shortened the damage interval in several recent situations. If an admin’s account were compromised and causing problems, then another administrator could block that admin. Without self-unblock, the bad actor would be immobilized. The nuclear option, therefore, leverages the power of numbers: English Wikipedia’s 1,194 administrators could likely detect the hacker and block them more quickly than the 34 global stewards, who have technical responsibilities across all Wikimedia projects. Vandalism issues that had previously taken minutes to fix (like the recent Donald Trump incident) could potentially be solved within seconds.
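The rule change itself is small enough to sketch in a few lines. This is my own simplified model of the before-and-after behavior, not MediaWiki’s actual code; the class, function, and flag names are invented for illustration.

```python
NUCLEAR_OPTION = True  # stand-in flag for the November policy change

class Admin:
    def __init__(self, name):
        self.name = name
        self.blocked = False

def unblock(actor, target):
    """Attempt to unblock `target` on behalf of `actor`; return success."""
    if actor.blocked and actor is target and NUCLEAR_OPTION:
        return False  # a blocked admin may no longer lift their own block
    target.blocked = False
    return True

a, b = Admin("A"), Admin("B")
a.blocked = True          # Administrator B has blocked A
assert not unblock(a, a)  # A cannot self-unblock under the new rule
assert unblock(b, a)      # but any other admin can still free A
assert a.blocked is False
```

The one-line guard is the whole “nuclear option”: a compromised account stays frozen until some other administrator acts, which is exactly where the strength-in-numbers argument comes in.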

But opponents of the nuclear option pointed to a potential doomsday scenario in which a bad actor could hack an admin account and mass-block every other administrator. Without the capability to self-unblock, the administrators would be helpless. Theoretically, the bad actor could hold Wikipedia hostage for a while, or at least until a steward with greater authority exercised the power to mass-remobilize the rest of the admin community. (Sidebar: Does this sound like the most epic game of freeze tag to anyone else?)

Members of the Wikipedia community have described two credible countermeasures to this unlikely encyclopedia apocalypse: One, an administrator’s ability to block other administrators could be rate-limited, which would prevent a massive attack in which a compromised account blocked everyone else in rapid succession. Two, the Wikimedia Foundation could release a further modification to the code that would allow a blocked administrator to block the admin account that blocked them (but not others), thereby removing the first-mover advantage. In keeping with the nuclear deterrence theme, this block-the-blocker move would essentially result in a stalemate.
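The first countermeasure is a standard sliding-window rate limit. Here is a minimal sketch of how it could work; the 60-second window and three-block threshold are made-up parameters for illustration, not anything Wikipedia has proposed.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60     # hypothetical sliding window
MAX_ADMIN_BLOCKS = 3    # hypothetical per-admin limit within the window

recent_blocks = defaultdict(deque)  # actor name -> timestamps of admin blocks

def may_block_admin(actor, now=None):
    """Allow an admin-on-admin block only if the actor is under the limit."""
    now = time.time() if now is None else now
    q = recent_blocks[actor]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop attempts that have aged out of the window
    if len(q) >= MAX_ADMIN_BLOCKS:
        return False  # over the limit: likely a compromised account
    q.append(now)
    return True

# A burst of four admin blocks within a second trips the limiter on the fourth.
results = [may_block_admin("evil_admin", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
assert results == [True, True, True, False]
```

A limit like this would not stop a hacker from blocking a few admins, but it makes the doomsday scenario of freezing all 1,194 of them in one burst impossible.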

Even before the November decision on the nuclear option, Wikipedia had been carefully considering potential security issues with administrator accounts. A 2011 community resolution resulted in the decision to deactivate administrator accounts that had been inactive for more than one year, under the theory that these dormant accounts were more vulnerable to hackers. Since 2016, the site has been strongly encouraging two-factor authentication, or 2FA, for all Wikipedia administrators to add an extra layer of protection to those accounts.

Wikipedia administrator Molly White (username GorillaWarfare) supported the recent change to remove the self-unblock capability on the English-language Wikipedia but wished it had not been implemented globally across all Wikimedia projects. Other language encyclopedias and wikis often have fewer administrators, and those admins might benefit from the ability to self-unblock in the event of a breach.

White also said the community should think twice before requiring 2FA for everybody. “Wikipedia is a global project with users from all sorts of locations and socioeconomic situations, and I think it’s important that we do not restrict people from editing because they cannot pay for a cellphone,” she wrote in an email. The 25-year-old Boston software engineer suggested that the nonprofit Wikimedia Foundation should consider providing physical security keys as a secondary authentication option. These hardware tokens often connect via USB and do not require owning a smartphone, keeping the playing field level. Hardware tokens also seem to be effective: Since transitioning to physical security keys last year, Google has not had a single employee phishing incident.

The fact that members of the Wikimedia community are seriously contemplating a security measure implemented by one of the most valuable tech companies in the world suggests that they are keenly aware of the project’s central role in the information ecosystem. With Big Tech increasingly relying on its information, the open knowledge platform is positioned for an elevated role as the essential infrastructure of free knowledge.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.