Beware More Closed-Off Conversations on Twitter

Five features Twitter is considering for 2020 would give users more control—but one is particularly risky.


Twitter users have hated basically every major change to the platform the company has announced. Its July 2019 redesign was lambasted (“It’s like someone going in your house and rearranging the furniture,” tweeted one user), as were its 2017 decisions to double the length of tweets and allow users to @-mention up to 50 people in a post, its 2015 addition of the quote-tweet function, and the small blue dot that appeared in its Moments logo.

That changed when the company recently announced that it would stop running political ads. (Slate’s own Aaron Mak pointed out that founder Jack Dorsey’s statement on the decision was an “obvious subtweet of Zuckerberg’s position.”) Some users noted that this marked a new milestone for Twitter: a sitewide change that wasn’t immediately universally booed. Perhaps Twitter can keep that streak going. Earlier this week, Dantley Davis, the company’s vice president of design and research, tweeted a list of five features he was “looking forward to in 2020.” Of course, none of these is a guarantee, but given Davis’ position in the company, it seems plausible that these could actually be rolled out—or at the very least, they reveal the new directions the company’s leaders are exploring. “We can confirm those are ideas the design team is exploring in 2020,” a Twitter spokesperson told me, noting that the team always does this kind of future vision work, though not all ideas come to fruition. As Twitter continues to combat disinformation and harassment on its platform, it’s worth considering how these new features could be a boon in that fight—or a liability.

The dominant theme among these new features is giving users more control over what they see on the site, and how others see their Twitter activity. For instance, Davis proposes the option to prevent other users from retweeting you or tagging your username, allowing users to keep a lower profile on the site without making their entire account private. Davis also mentions a feature that would allow you to remove yourself from conversations, which would be a godsend for anyone who’s found herself tagged into an endless reply war.

The last feature in Davis’ list would be the biggest change in the way Twitter operates, and presents the most potential for abuse: the ability to tweet only to a specific hashtag, interest, or group of users. Currently, tweets from all Twitter users with public accounts are viewable on their account pages and appear in their followers’ timelines (algorithm willing). This “tweet only to” feature could give users the option to direct tweets to a select group of people or a hashtag.

There are situations in which this could be useful. For instance, I think most of my Twitter followers are primarily looking for science news stories, not my opinions on, say, the Japanese reality TV show Terrace House. A “tweet only to” feature would allow me to relegate my tweets about it to a hashtag like #TerraceHouse to avoid spamming my whole timeline, or I could set those Terrace House tweets to be visible only to other users I know to be fans.

But this feature could also be dangerous. Allowing users to create semiprivate discussions on the site by tagging specific users or setting tweets to only appear under certain hashtags would be Twitter’s version of an invite-only, private Facebook group. And while private Facebook groups have been a boon for genuine connection and community, BuzzFeed reports that they’re also often “a global honeypot of spam, fake news, conspiracies, health misinformation, harassment, hacking, trolling, scams, and other threats to users.” This kind of privacy on Twitter would be a double-edged sword: It lends itself to a more intimate, customizable user experience, but it could also make it harder for users to spot bad behavior and take steps to combat it.

If you’ve been harassed on Twitter, you might easily imagine how a “tweet only to: hashtag, interest, or these friends” option could be leveraged to organize a campaign against a specific user, or even a group of users. Say you wanted to get a bunch of people to tweet, “Hmm, I am not sure about these new features” at Twitter’s official account. Currently, you have two options on the site to spread the word. One, you could expend a great deal of time and effort trying to privately message people to join your campaign, but there’s no telling whether those messages would actually go through. (You can send direct messages to anyone who follows you, but if you’re messaging folks who don’t follow you, they may or may not have selected the option to receive direct messages from strangers.) Or two, you could write a tweet encouraging others to join your campaign, tag up to 50 people in that conversation, and hope that other people follow suit. But everything you write would be out in the open, visible to anyone on the platform, so it’s possible that the target of this proposed campaign would be able to take action to preempt that harassment, like reporting the instigating accounts or temporarily putting their account on lockdown.

The ability to tweet only to a specific group or hashtag would open up a third possibility: These campaigns could be more easily organized while hidden away from users who might otherwise report those accounts or alert potential targets of harassment about what’s to come. Users could also create back-channel conversations where conspiracy theories and incorrect facts easily propagate. For better or worse, Twitter’s current features allow users to see all other public tweets, and many users take the opportunity to “dunk on” other people’s bad or misguided ideas. Imagine if, instead, those ideas were siloed to specific hashtags or groups of followers. That would amount to creating an echo chamber inside a platform that’s already known for being an echo chamber.

While these new features could definitely improve users’ ability to filter harassment and disinformation out of their timelines, they would be a Band-Aid that doesn’t address the root of the problem: stopping the sources of that abuse. That’s a more difficult and far-reaching problem to solve. As recently as this summer, Twitter has announced new policies to combat “hateful conduct,” like cracking down on hate speech against religious groups and aiming for more consistent enforcement of its own terms of service.

Meanwhile, if Davis’ list is any indication, it sounds like we’ll have to wait a while longer before Twitter considers users’ ever-popular request to edit their own tweets.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.