Twitter is in crisis, and vultures are circling. Circling but not biting: Of the four companies named as potential buyers in recent weeks—Disney, Microsoft, Salesforce, and Google parent Alphabet—all have since withdrawn from the running.
There are plenty of reasons for Twitter’s struggles, and plenty of reasons why a buyer might shy away. But one has emerged this week in multiple reports as perhaps the ultimate deal-breaker for at least two of the firms that had been eyeing a bid: abuse.
Disney backed out of its bid “partly out of concern that bullying and other uncivil forms of communication on the social media site might soil the company’s wholesome family image,” Bloomberg reported on Tuesday, citing anonymous inside sources. That followed a report on Monday by CNBC’s Jim Cramer that “trolls were part of the reason” Salesforce walked away. After spiking to its highest price of the year on the acquisition rumors, Twitter’s stock has crashed back to below where it was before the talks started.
Twitter’s brain trust, which skews white and male, has tended to regard harassment as a sort of necessary evil—a nuisance rather than a fatal flaw. That view has not been universally shared by the site’s users, especially women, religious minorities, and people of color, who have long found their mentions polluted with venomous personal attacks. Many, though not all, of these assaults come from strangers using anonymous accounts. To users’ pleas for help, Twitter’s response over the years has fluctuated between coldly dismissive and earnestly bumbling, as Charlie Warzel documented in a BuzzFeed story that dubbed the network “a honeypot for assholes.” At the same time, Twitter’s business has suffered from a perception that many of its users are fake. Spam bots, sex bots, and duplicate accounts crop up on the service with dispiriting regularity, and for all the company’s efforts to rein them in, the problem seems to be getting worse.
Such ugliness is not unique to Twitter. On the contrary, it’s hard to think of a social media service or other online commons that doesn’t suffer from its share of abuse. But relatively few share Twitter’s commitment to both anonymity and publicness, for good reason: It’s a potent combination, one that invites people to broadcast vile sentiments they’d never post under their real names. One notable site that does share these traits: Reddit, whose own image and value have been permanently damaged by its well-documented problems with racist trolls and sexual predators. Another: Yik Yak, the anonymous location-based bulletin board whose wildfire growth was doused by bullying to the point that it has begun to pivot toward private messaging.
Until recently, Twitter’s leaders could make a relatively convincing case that its abuse problem did not pose an existential threat to its business. After all, it’s user growth, not increased user satisfaction or a more wholesome brand image, that investors are clamoring for. And so Twitter has largely resisted significant changes to its core product while grasping instead for fresh growth opportunities, such as its social streaming app Periscope or its contract with the NFL to simulcast football games. Its approach to spam and abuse, meanwhile, has included periodic purges of certain types of fake accounts, coupled with the occasional ad hoc ban of a real person. More recently, it introduced a quality filter that seeks to keep bots and hate speech out of people’s mentions. Still, critics have pointed out that Twitter seems far quicker to ban people for harmless copyright infringement than for threats or racial slurs.
For better or worse, that is no longer a tenable position. Fairly or unfairly, Twitter’s stock valuation already reflected its relatively limited prospects for user growth. Now it also reflects the company’s status as a mess that no acquirer wants to take on. And that is a direct result of its ineffectual approach to trolls.
Twitter could tackle this problem in one of two ways. It could become less open and public, as David Auerbach suggested in a January Slate column that looks more prescient by the day. His “drastic plan” to save Twitter is well worth reading. Alternatively, Twitter could remain open and public, but step back from its commitment to anonymity. This, to me, is the simpler path. It could be accomplished without changing the core architecture of the service, via a feature that it already offers: user verification. Best of all, this approach would preserve the potential for accounts to remain anonymous and still be widely heard and followed—provided people choose to hear and follow them. Facebook has a real-name policy, and it’s controversial. This would be more like a real-name nudge.
The approach would work like this:
Step 1: Gradually open verification to all users willing to use their real names as their handles, including public and private individuals, brands, and organizations. (Today, verification—signified by a blue check mark next to a user’s name—is available only to “accounts of public interest.” As of July, less than 0.1 percent of all users were verified.) Verification would also be extended, on a case-by-case basis, to those who have legitimate reasons to employ pseudonyms on the site, like the San Francisco drag queens who were unfairly booted from Facebook. This would take a lot of time and some ingenuity, as Twitter couldn’t possibly conduct a manual review of everyone’s identity. It would need a way to accomplish the bulk of the verification process passively, via software, and to guarantee the privacy of users’ information. But the process wouldn’t have to be foolproof—just good enough to filter out the bots and the sock puppets using burner email accounts. After that, Twitter would rely on user reporting to flag fake accounts that slipped through verification. Importantly, no Twitter user would be required to get verified, and no one would be kicked off the service for declining. The process would be opt-in, albeit with a carrot and a stick, as I’ll explain in Steps 2 and 3.
Step 2: Gradually incorporate verification as a signal in Twitter’s “quality filter,” an option it already offers to those who want their mentions algorithmically filtered for bots and fake accounts. Use machine learning to improve the quality filter to the point where it can reliably filter out the majority of abusive tweets from real people, not just spam. Eventually, offer users the option to see only tweets from (a) people they follow, (b) verified strangers, or (c) unverified strangers whom they’ve decided to manually whitelist. Then make that the default. That would still allow for the sort of serendipitous interactions among strangers that make for some of Twitter’s most delightful moments, such as the time Mario Batali and Gavin Rossdale chimed in to give a random Scranton, Pennsylvania, resident some cooking tips. It would also allow unverified users with real followings, such as parody accounts, to remain an integral part of the platform. But it would mostly tune out the voices of unverified users with small followings, who tend to be the ones wantonly spewing hate. You could still see them in your mentions if you really wanted to, but they’d be relegated to the Twitter equivalent of Facebook’s “message requests.”
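At its core, the default described in Step 2 is just a three-way allow rule applied to each incoming mention. Here is a minimal sketch in Python of that rule in isolation (the class and field names are hypothetical, and a real system would layer machine-learned abuse scoring on top of this):

```python
from dataclasses import dataclass, field


@dataclass
class Viewer:
    """A user whose mentions are being filtered (hypothetical model)."""
    following: set = field(default_factory=set)
    whitelist: set = field(default_factory=set)  # unverified accounts manually approved


@dataclass
class Tweet:
    author: str
    author_verified: bool


def show_in_mentions(viewer: Viewer, tweet: Tweet) -> bool:
    """Proposed default: surface a mention only if its author is
    (a) someone the viewer follows, (b) a verified stranger, or
    (c) an unverified stranger the viewer has whitelisted.
    Everything else lands in a low-priority tray, akin to
    Facebook's "message requests"."""
    return (
        tweet.author in viewer.following
        or tweet.author_verified
        or tweet.author in viewer.whitelist
    )
```

The point of the sketch is that the rule is cheap and transparent: unverified accounts aren’t banned or hidden from the service, they’re simply not amplified into strangers’ mentions by default.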
Step 3: Take a more active and nuanced approach to reports of abuse and harassment by verified users. With the unverified trolls mostly contained, Twitter could develop more coherent policies for handling accounts such as that of professional alt-right provocateur Milo Yiannopoulos; its approach until now has been alarmingly ad hoc and seemingly arbitrary. After years of allowing Yiannopoulos and his followers to harass less well-known members of Twitter, the company permanently banned him in July after he targeted a celebrity whose complaints drew a personal response from Jack Dorsey. Twitter needs a team of thoughtful people who can follow a consistent process to evaluate claims of abuse from everyone, not just professional actors or the president of the United States. The good news is, when Twitter does decide action is merited against a verified account, it will actually be enforceable, unlike its fruitless bans against anonymous accounts whose owners can immediately reregister under a different email address and name.
No doubt my proposal has serious drawbacks and implementation hurdles of its own. Auerbach told me it reminds him of Google’s failed attempt to establish Google Plus as an identity service. “Privileging verified users will, I suspect, silence the underrepresented while excusing the celebrities,” he said.
From Twitter’s perspective, my plan would probably run afoul of its noble, if arguably misguided, hard-line commitment to anonymity and freewheeling speech—all without fully solving the abuse problem. It’s true that there’s great public value in a platform that allows almost anyone to be heard, even if others would like them silenced. Twitter’s importance to political dissidents, for example, is underscored every time an autocratic regime tries to censor it or shut it down.
Unfortunately, Twitter is not a public-benefit corporation. Since it decided to go public in 2013—a mistake, I believe, in retrospect—the company must answer to its shareholders, and they’ve made their top priority clear: growth. And not slow, steady growth, but rapid growth on a massive scale. They want Twitter to be more like Facebook.
I’ve argued for years that Twitter is fundamentally different from Facebook, and we should all root for it to stay that way. Yet, for all its shortcomings as a venue for discussion, Facebook’s platform is far less conducive than Twitter’s to public abuse from unaccountable trolls. At this juncture, Twitter simply has to find a way to become a little less hostile. If it can do that, at the very least the company will become palatable to corporate suitors while better serving the majority of its users. At the same time, its data will become more valuable to advertisers. And beyond that, who knows: It might even start growing again.