The Industry

Twitter Spent the YouTube Shooting Deleting Lies and Hoaxes. Could It Do That All the Time?

Tweets about the YouTube shooting were unreliable—because so many of them were intentionally misleading.
JOSH EDELSON/Getty Images

As word spread on Twitter that there was an active shooter at YouTube headquarters in San Bruno, California, trolls and misinformation artists sprang into action on the social news site. This time, so did Twitter’s own employees.

In a blog post Thursday, Twitter explains how it treated the YouTube shooting as a sort of information-crisis situation. The company’s response suggests that Twitter is taking on a more active editorial role in the midst of certain types of breaking news events. Yet the outcome of its efforts Tuesday suggests that’s going to be an uphill battle—and Thursday’s blog post helps to explain why.

The chaos on Twitter began shortly after YouTube’s Vadim Lavrusik tweeted Tuesday that there was an active shooter at his company’s headquarters. As BuzzFeed’s Jane Lytvynenko documented, Twitter quickly became flooded with competing and contradictory claims as to who the shooter was and what their motivations were. Hoaxsters circulated pictures of comedian Sam Hyde, falsely identifying him as the gunman; others reported that the shooter had been confirmed as a “white supremacist,” a Trump supporter, or a “Muslim refugee.” Meanwhile, Lavrusik’s account was hacked and tweeted that he had lost a friend in the shooting; it linked to a picture of the YouTube star Keemstar (who was fine). “In the chaos of an unfolding tragedy,” BuzzFeed’s Lytvynenko and Charlie Warzel concluded, Twitter is “no longer a helpful place to follow breaking news.”

That’s a harsh judgment on a company that prides itself on being a global source of real-time breaking news—especially given all that Twitter said Thursday about its efforts to improve.

According to the blog post, Twitter used a combination of software and human moderation to combat potentially dangerous misinformation, suspending “hundreds” of accounts, preventing suspended users from creating new ones, and requiring some users to delete tweets. At the same time, Twitter’s “Moments” team worked to quickly highlight credible tweets about the shooting from reputable and verified sources.

But Twitter has set for itself a particular challenge when it comes to combating misinformation: Spreading misinformation isn’t actually against its rules. “We do not have a policy under which Twitter validates content authenticity or accuracy,” the company noted in its blog post. “As we’ve previously shared, we strongly believe Twitter should not be the arbiter of truth.”

That makes sense when you look at Twitter as a tech platform. If it were in the business of vetting the content of every tweet for factual accuracy, it could never be the fast-moving global information hub that it is today. Factual disputes can be thorny, and there are thousands of tweets posted every second, in numerous different languages. It simply isn’t possible.

If Twitter can’t be an arbiter of truth, how can it fill its role as a breaking news source? That’s what its blog post tries to explain. Basically, when a major news event is unfolding, it steps up its enforcement of a series of other, existing policies against accounts that are spreading misinformation. For instance, Twitter asks:

Is the content posted to harass or abuse another person, violating our rules on abusive behavior?

Is this meant to incite fear against a protected category as outlined in our hateful conduct policy?

Could misrepresenting someone in this way cause real-world harm to the person who is targeted per our rules on violent threats?

Is this account attempting to manipulate or disrupt the conversation and violating our rules against spam?

Can we detect if this account owner has been previously suspended? As outlined in our range of enforcement options, when someone is suspended from Twitter, the former account owner is not allowed to create new accounts.

In such situations, Twitter adds, “we rapidly implement proactive, automated systems to prevent people who had been previously suspended from creating additional accounts to spam or harass others, and to help surface potentially violating Tweets and accounts to our team for review.”

The idea is that if the hoaxes are mostly being spread by roughly the same small minority of Twitter users, then targeting repeat offenders, harassers, and bots might knock out a significant chunk of the hoaxing apparatus, without the need for Twitter to factually evaluate each claim.

It’s fair to ask why Twitter doesn’t always enforce its policies this vigorously. The answer is probably that it doesn’t have the necessary resources to do so around the world, around the clock. The heightened enforcement also brings a greater likelihood that Twitter will mistakenly take action against an innocent account. That may be a risk it’s willing to run in cases where lives may be immediately at stake, but not in other circumstances. It’s unclear whether Twitter would step up its enforcement in this way during a nonviolent breaking news event that still might inspire hoaxes, like an election.

The company admitted Thursday that its solution remains a work in progress. It wrote:

We are continuing to explore and invest in what more we can do with our technology, enforcement options, and policies—not just in the U.S., but to everyone we serve around the world. Initial ideas include better use of our technology to catch people working to evade a suspension and identifying malicious, automated accounts, and more quickly activating our team to ensure a human review element continues to be present within all of our automated processes.

All of these efforts seem admirable and necessary. The question is: On a global platform that allows anonymity, and that is too big to police in real time, will they ever be enough?