Future Tense

No, Wikipedia Is Not Colluding With DHS

The Homeland Security Advisory System, ranking alert levels from Low to Severe, next to the Wikipedia logo.
Photo illustration by Slate. Images via Wikipedia.

Welcome to Source Notes, a Future Tense column about the internet’s information ecosystem.

Shortly after Elon Musk gained control of Twitter on Oct. 27, the mysterious-sounding hashtag #DHSLeaks began trending on the platform. “#DHSLeaks changes everything / Now we know who was behind the algorithm, trending, Wikipedia, and search results all along / It was feds,” tweeted Jack Posobiec, an alt-right political activist. Supporters of the conspiracy theory pointed to a story from the Intercept suggesting that the Department of Homeland Security was secretly pressuring Twitter, Facebook, Reddit, LinkedIn, Wikipedia, and other platforms to review or remove certain content.

But is there any substance to the claim that the feds have been deciding what information should be published on Wikipedia and other sites? There is not. As Techdirt’s Mike Masnick rightfully argued, the Intercept’s story about the U.S. government arbitrating disinformation on tech platforms like Wikipedia is “absolute garbage” and “bullshit reporting.”

I’ll add one more criticism to the list: The false framing is insulting, especially to the volunteer Wikipedia editors who do the hard work of curating reliable information for the site. Those Wikipedians are not controlled by Uncle Sam.

To understand how the #DHSLeaks conspiracy theory came about, it’s worth reviewing the history. In 2018, President Donald Trump signed the bill creating the Cybersecurity and Infrastructure Security Agency as a separate agency within DHS. CISA’s mission is to lead cybersecurity and critical infrastructure security programs, operations, and associated policy. (DHS previously housed these responsibilities in an internal program called the National Protection and Programs Directorate; the creation of CISA elevated this function to a standalone federal agency at the same level as the U.S. Secret Service or FEMA.)

In the Intercept’s version of events, DHS met with private tech platforms “behind closed doors,” using the full power of the U.S. government to “try to shape online discourse.” OK, yes, CISA met with nine major tech organizations beginning in August 2020 to discuss how to protect the integrity of the presidential election, especially the potential for foreign actors to interfere. But contrary to the Intercept’s framing of the story, those 2020 meetings were public knowledge. In fact, the tech organizations issued a joint public statement at the time, notes were shared publicly, and the meetings were widely covered by the media. (Perhaps the most frustrating aspect of the Intercept’s reporting is that it describes documents that were publicly posted to CISA’s website as having been “obtained via leaks.”)

Likely realizing that they could not substantiate any claims about direct government interference, the story’s authors resort to cagey language: “The extent to which DHS initiatives affect Americans’ daily social feeds is unclear.” But the tech organizations themselves say there was no secret pressure to change content. “The government has never raised issues or questions specific to any Wikimedia Foundation sites or content on our sites in those meetings, and the Foundation has never changed content on our sites due to these meetings,” said Jan Eissfeldt, global head of trust and safety at the Wikimedia Foundation, the nonprofit organization that supports Wikipedia, in an email. According to Eissfeldt, the meetings consisted of high-level updates on what participants were seeing on their respective platforms and scenario planning related to election results.

While DHS did not make any content demands, it is true that Wikipedia volunteers themselves took action to protect knowledge integrity on the site. Back in October 2020, Molly White, a volunteer Wikipedia editor and administrator, added what Wikipedians call “extended confirmed protection” to the 2020 United States presidential election page. This locked the entry so that it could be changed only by editors whose accounts were at least 30 days old and who had made at least 500 edits—preventing an online free-for-all in which supporters of either Trump or Biden could declare victory before all votes were counted. Today, the 2020 election page is subject to the more relaxed standard known as “semi-protection,” meaning that it cannot be edited by unregistered users (IP addresses) or by new accounts with very few edits. Essentially, the Wikipedians are trying to prevent “drive-by” vandalism, where someone registers a new account for the express purpose of wreaking havoc.
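
None of this happens in secret, either: a page’s protection level is visible to anyone. As a minimal sketch, here is how you could query Wikipedia’s public MediaWiki API to see a page’s current protection status (assuming Python with the third-party requests package installed):

```python
# A minimal sketch using Wikipedia's public MediaWiki API.
# Assumes Python 3.9+ with the `requests` package installed.
import requests

API = "https://en.wikipedia.org/w/api.php"

def protection_status(title: str) -> list:
    """Return the protection entries for a given Wikipedia page."""
    resp = requests.get(API, params={
        "action": "query",
        "titles": title,
        "prop": "info",
        "inprop": "protection",
        "format": "json",
    })
    resp.raise_for_status()
    # Results are keyed by internal page ID; we asked about only one title.
    (page,) = resp.json()["query"]["pages"].values()
    return page.get("protection", [])

# "autoconfirmed" corresponds to semi-protection; "extendedconfirmed" is
# the stricter 30-days-and-500-edits rule; "sysop" means admins only.
for entry in protection_status("2020 United States presidential election"):
    print(entry["type"], entry["level"], entry["expiry"])
```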

Notice how in each case the ultimate decision about Wikipedia’s content was made by volunteer contributors—not by the feds, and not even by Wikimedia Foundation employees. Social media sites employ thousands of content moderators, which is why tech writers are sounding the alarm about the user experience on Facebook and Twitter following massive layoffs at those companies. By contrast, paid employees have very little direct influence over Wikipedia’s content. “This is by design to help protect the independence of volunteers to make decisions about what content is included on Wikipedia and how it is maintained,” Eissfeldt said.

In fact, volunteer Wikipedians consider it a core duty to protect the encyclopedia from the kind of manipulation the Intercept article describes. Several told me that they use a combination of human observation and technical tools to determine whether users are genuinely aiming to publish neutral articles—or, to use the wiki phraseology, are “here to build an encyclopedia.” Girth Summit, the username of a Scottish-born Wikipedia editor and administrator who actively monitors for disinformation, said in an email that one of the main signals that someone is trying to add propaganda to Wikipedia is a one-sided approach: making changes without engaging in civil dialogue or substantiating their edits with reliable sources.

Once a disruptive user engages in a series of unhelpful contributions to Wikipedia, like edit warring on a page to reinsert their preferred version, the site’s volunteer administrators will move to block that username. But people who have been blocked on Wikipedia often create alternate usernames—so-called sockpuppet accounts—in order to bypass that block. “Editors who have dealt with them [sockpuppets] before often spot behavioral tells—a particular set of articles they’re interested in perhaps, an esoteric viewpoint, certain quirks in their use of the English language; even just things like the times of day when they are editing can sometimes be useful indicators,” Summit said.
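
To give a rough sense of what such a comparison might look like, here is a hypothetical sketch—not Wikipedia’s actual tooling—that pulls two accounts’ recent contributions from the public MediaWiki API and compares their editing hours and overlapping articles (the usernames in the usage note are invented):

```python
# A hypothetical sketch of a behavioral comparison; not Wikipedia's
# actual tooling. Uses the public MediaWiki API via `requests`.
from collections import Counter
import requests

API = "https://en.wikipedia.org/w/api.php"

def recent_contribs(user: str, limit: int = 500) -> list:
    """Fetch up to `limit` recent edits (timestamp, page title) for a user."""
    resp = requests.get(API, params={
        "action": "query",
        "list": "usercontribs",
        "ucuser": user,
        "ucprop": "timestamp|title",
        "uclimit": limit,
        "format": "json",
    })
    resp.raise_for_status()
    return resp.json()["query"]["usercontribs"]

def compare(user_a: str, user_b: str):
    a, b = recent_contribs(user_a), recent_contribs(user_b)
    # Histogram of editing hours: socks often mirror their master's schedule.
    hours_a = Counter(int(c["timestamp"][11:13]) for c in a)
    hours_b = Counter(int(c["timestamp"][11:13]) for c in b)
    shared_hours = sum(min(hours_a[h], hours_b[h]) for h in range(24))
    # Articles both accounts touched: socks return to the same narrow topics.
    shared_pages = {c["title"] for c in a} & {c["title"] for c in b}
    return shared_hours, shared_pages

# Usage (both usernames here are made up for illustration):
# compare("BlockedEditor", "SuspiciousNewAccount")
```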

Once enough evidence has been gathered in a sockpuppet investigation, a small group of trusted Wikipedia volunteers called checkusers can look up the IP address behind an account. That lookup can reveal whether the same puppet master is operating a new crop of dummy accounts—and allow the checkuser to block those, too.
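
CheckUser data itself is private to that small, vetted group, so the following is purely an illustrative sketch of the underlying idea—grouping accounts that edit from the same address—with invented log records:

```python
# Purely illustrative: the idea behind a checkuser lookup is grouping
# accounts that edit from the same IP address. This log is invented.
from collections import defaultdict

log = [
    {"user": "NewAccount1", "ip": "203.0.113.7"},
    {"user": "NewAccount2", "ip": "203.0.113.7"},
    {"user": "Bystander",   "ip": "198.51.100.4"},
]

accounts_by_ip = defaultdict(set)
for record in log:
    accounts_by_ip[record["ip"]].add(record["user"])

for ip, accounts in accounts_by_ip.items():
    if len(accounts) > 1:
        print(f"{ip}: possible sockpuppet cluster {sorted(accounts)}")
```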

The disinformation monitoring process involves a lot of work for Wikipedia’s editors—volunteers who, once again, are not the CISA agents that the Intercept describes as “truth cops.” There are, however, credible threats to Wikipedia’s information landscape. A report released by the Institute for Strategic Dialogue in October analyzed the activities of 86 banned editors and found many attempts to alter content on Wikipedia to push the Russian narrative about the invasion of Ukraine. The bad actors tended to show certain tendencies, such as repeatedly inserting quotes from the Kremlin into articles, or citing only RT, Sputnik, and other Russian state-controlled media sources, even though Wikipedians have decided that these sources are generally unreliable and should not be used. In most cases this pro-Russia agenda-pushing was quickly caught and fixed by Wikipedians. Overall, the report’s authors noted that information itself could join air, land, space, and cyber as another “theatre of war.”

It’s hard to tell whether any of the banned users mentioned in the ISD report were actually employed by the Russian state. According to Trenton Schoenborn, a conflict and political situation reporter, the more likely scenario is that Russian officials have made a centralized decision—for instance, that Putin’s invasion of Ukraine is a “special military operation.” Then, they allow that message to “drip down” through Russian nationalist supporters, a group that might have included whoever tried to copy the Moscow narrative into Wikipedia.

Other countries may be more organized than Russia when it comes to their Wikipedia strategy. In October 2021, I covered how the Wikimedia Foundation banned seven Wikipedia users and removed administrator powers from 12 users who were affiliated with a group known as Wikimedians of Mainland China. This action followed reports that WMC was seeking to control the Chinese-language version of Wikipedia and skew its content toward a hardline Chinese nationalist point of view.

When I asked Wikipedia contributors whether Western governments should communicate with Wikipedia representatives, they said yes, but with firm boundaries. “It’s good for democratic governments to be involved in fighting disinformation and propaganda, especially that of malicious actors like Russia, as long as there’s a clear line and the government isn’t trying to control the tech platforms, but rather shares information and provides support,” said Anton Protsiuk, a Wikipedia editor and administrator from Ukraine, in an email.* For him, the transparency in addressing potential disinformation threats during the 2020 election was a step in the right direction.

For all the fear and doubt spread by the conspiracy theorists on Twitter, the story about DHS-Wikipedia collusion does not hold up. There were no secret back-channel talks, no requests to change content. The truth is much more mundane: The feds reached out to tech organizations to share intelligence that bad actors were interested in disrupting their platforms, and the participants told the world about it. By misrepresenting reality to suggest that there was secret pressure, the conspiracy theorists behind #DHSLeaks are taking a page from Moscow’s playbook.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.

Correction, Nov. 21, 2022: This article originally misquoted Anton Protsiuk as referring to “malicious actors in Russia.” He said “malicious actors like Russia.”
