Future Tense

Regulating Bots on Social Media Is Easier Said Than Done

For one thing, defining bot is tricky.

A robotic hand holds a pen.
Photo illustration by Slate. Photos by Thinkstock.

A bot is an automated software program that does something. Beyond this rudimentary description, bots vary tremendously. They moderate chat room discussions, scrape the web to collect information, and provide customer service on websites. They also pose as real people on social media, where they can cause serious mischief. It is this last capability that has made bots a part of our common vernacular.

Both Congress and California are currently considering legislation that would require social media bots to disclose the fact that they’re automated. These bills respond to serious, well-founded concerns about the use of social media bots to spread misinformation and sow discord online, most infamously during the 2016 election season. It’s a well-intentioned idea, but the proposals face a common challenge in the regulation of new technology: defining the technology itself. The definitions section is perhaps not the most exciting part of any bill, but it is critical: It tells us who will be subject to the requirements and prohibitions that follow. Both the federal and state bills have definitions sections, yet neither tells us precisely what it means by “bot.”

Sen. Dianne Feinstein’s bill attempts to get around (or at least delay) definitional pitfalls by avoiding the word bot altogether. Instead, it applies to any “automated software program or process intended to impersonate or replicate human activity online” in the social media context. The bill then directs the Federal Trade Commission to define that term “broadly enough so that the definition is not limited to current technology.” If the bill becomes law—which seems unlikely, given that it’s in the earliest stages of the legislative process—it remains to be seen how the FTC would grapple with that challenge.

Early drafts of California’s bill defined bot as “an online account that is designed to mimic or behave like the account of a natural person.” This definition, by its terms, would sweep in a human parodying another human while excepting a bot impersonating an organization such as the ACLU. Despite this troubling definition, the bill passed comfortably in the California Senate, though it remains to be seen whether it will make it all the way to enactment after amendments in the state Assembly. The Assembly revised the bill to define a bot as an “automated online account on an online platform that is designed to mimic or behave like the account of a person.” While this is a marked improvement, the new definition still fails to account for the variety of bots, including the gray area between bots and people. As Robert Gorwa and Douglas Guilbeault’s helpful typology of bots shows, a social media account could have automated components but retain some degree of human control, creating a sort of cyborg bot (a “cybot”). Cybots could, for instance, automatically share Instagram posts on other platforms such as Facebook and Twitter. They could also automatically follow or respond to other social media users who mention or follow them.
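
To make the hybrid concrete, here is a minimal sketch of what the automated half of such an account might look like. The SocialClient interface and its methods are invented for illustration; they are not any real platform’s API.

```python
# A minimal sketch of a "cybot": one account that is partly automated and
# partly human-run. The SocialClient interface below is invented for
# illustration; it is not any real platform's API.
import time


class SocialClient:
    """Hypothetical stand-in for a platform API client."""

    def new_instagram_posts(self) -> list:
        return []  # captions of Instagram posts not yet cross-posted

    def new_mentions(self) -> list:
        return []  # usernames that recently mentioned this account

    def post(self, text: str) -> None:
        print("posted:", text)

    def follow(self, user: str) -> None:
        print("followed:", user)


def run_cybot(client: SocialClient, poll_seconds: int = 300) -> None:
    """The automated half of the account: cross-post and answer mentions.

    The account's owner can still log in and post by hand at any time,
    which is what makes it a hybrid rather than a pure bot.
    """
    while True:
        for caption in client.new_instagram_posts():
            client.post(f"New on Instagram: {caption}")  # automatic cross-post
        for user in client.new_mentions():
            client.follow(user)  # automatic follow-back
            client.post(f"@{user} thanks for the mention!")  # automatic reply
        time.sleep(poll_seconds)  # go dormant until the next poll
```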

The prospect of a cybot creates real obstacles to effective enforcement of bot disclosure bills. Imagine a bill that requires a familiar tool for identifying bot accounts: a CAPTCHA (“I am not a robot”). At most this would slow the cybot down: A human could check that box for each of her several hundred cybot accounts with relative ease, then allow them to resume their automated activity. Even a definition of bot broad enough to cover such hybrid accounts, in other words, would be difficult to enforce against them.
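
A rough sketch of why the checkbox is only a speed bump: the one step that needs a person can be cleared once per account, after which the software carries on unattended. Both function names here are hypothetical placeholders, not any platform’s actual tooling.

```python
# Sketch of a one-time CAPTCHA gate and why it barely slows a cybot operator.
# Both functions are hypothetical placeholders, not a real platform's API.

def solve_captcha_by_hand(account: str) -> bool:
    """The only step that needs a human: click 'I am not a robot' once."""
    return True  # a few seconds of a person's time per account


def resume_automation(account: str) -> None:
    """Everything after the checkbox runs unattended again."""
    print(f"{account}: automated activity resumed")


accounts = [f"cybot_{i}" for i in range(300)]  # several hundred accounts
for account in accounts:
    if solve_captcha_by_hand(account):  # the human clears the gate once ...
        resume_automation(account)      # ... and the software takes over again
```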

How, then, should bot disclosure laws treat these hybrid accounts? Should we require a certain degree of automation in order for the disclosure requirement to apply? What metrics can be used to determine whether something qualifies as a bot for purposes of a disclosure requirement? And regardless of what boundaries we set, whose job will it be to investigate and determine whether a particular account is, in fact, a bot?

On the basis of certain account metrics—how often the account posts, what kinds of language it uses, how many other accounts it follows, etc.—platforms such as Instagram and Twitter have performed “bot purges,” deleting millions of bots in one fell swoop. If the government steps in, however, the rules would have to change: Some kind of appeals mechanism, through which an account wrongly labeled as a bot could petition to have that label removed, might be necessary in the name of fairness and due process. Ultimately, determining which accounts really are bots—however the term is defined—will likely be a more labor-intensive undertaking than legislators realize.
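
For a sense of why such determinations are both automatable and error-prone, here is a rough sketch of the kind of metric-based heuristic a platform might apply. The thresholds and field names are invented for illustration; they are not any platform’s actual criteria.

```python
# Rough sketch of a metric-based heuristic for flagging likely bots.
# Thresholds and field names are invented, not any platform's real rules.
from dataclasses import dataclass


@dataclass
class AccountMetrics:
    posts_per_day: float
    following: int
    followers: int
    duplicate_post_ratio: float  # share of posts that repeat earlier text


def looks_like_a_bot(m: AccountMetrics) -> bool:
    score = 0
    if m.posts_per_day > 100:  # posts faster than most humans type
        score += 1
    if m.following > 20 * max(m.followers, 1):  # follows far more than it is followed
        score += 1
    if m.duplicate_post_ratio > 0.8:  # mostly repeats the same text
        score += 1
    return score >= 2  # crude cutoff; a prolific human can trip it too


print(looks_like_a_bot(AccountMetrics(150, 5000, 40, 0.9)))  # True
print(looks_like_a_bot(AccountMetrics(30, 800, 900, 0.1)))   # False
```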

In addition to their wide range of applications and varying degrees of automation, bots vary significantly in purpose and subject matter. Some bots are primarily commercial in nature, promoting products and services. Others are primarily political, expressing views on candidates and issues. Finally, there are many artistic, funny, and even useful bots, ranging from poetry bots like @accidental575 and @pentametron to Darius Kazemi’s hilarious @TwoHeadlines to natural disaster alert systems like @earthquakeBot. Some of these creative bots may use the ambiguity of the format to explore the boundary between human and machine. The question for legislators is whether proposed disclosure requirements should apply to all such bots or only to those that “speak” about particular subjects.

While several proponents call for a blanket disclosure requirement covering all bots, the First Amendment—which limits the government’s power to restrict expression—generally disfavors broad regulation of speech. Instead, laws must be narrowly tailored to address specific harms without sweeping in too much other speech. Noting this, California narrowed its disclosure bill to apply only to commercial bots and to bots intended to influence votes in elections. The federal bill, on the other hand, would require all bots to disclose their automated status. It is difficult to see how concerns about election interference justify disclosure requirements for creative bots that are unequivocally apolitical. And it is similarly difficult to envision how California will draw the line between bots intended to influence votes in elections and bots that simply “speak” on current events and issues.

As legislators begin regulating social media bots, they should take note of the many ways in which bots vary. If they fail to account for these complexities, they risk passing laws that are too ambiguous to enforce and too blunt to be effective.
