Future Tense

Bot-Generated Comments on Government Proposals Could Be Useful Someday

Don’t kick bots out of the process of developing government regulations. Brett Jordan/Unsplash

When the Federal Communications Commission asked the public what it thought about its net neutrality rules in 2017, the comments flooded in, including millions submitted under fake names by comment-generating bots. These missives added no value and raised concerns that people’s identities were being stolen. Now everyone from congressional Republicans to the New York state attorney general has their sights set on shutting down the bots. But anxiety about the risks of computer-generated comments may go too far. We shouldn’t let overblown fears squelch the development of future killer apps that could improve public participation in regulatory decision-making.


For decades, our federal rulemaking system has recognized the importance of giving the public an opportunity to comment on proposed regulatory changes. These days, government agencies typically post proposed rules and accept comments online. The age of electronic commenting has been a boon for public participation in many ways, but agencies also face some novel 21st-century challenges.


We were part of a research team convened by the Administrative Conference of the United States—an independent federal agency that makes recommendations about the processes the government uses to make rules and other legal determinations—to examine how new technologies have affected the public comment process. The project comes after a turbulent few years in public commenting, when long-running debates about the proper role of public commenting ran straight into the politically charged net neutrality rule, resulting in a record number of about 22 million public comments, many of which turned out to be fabricated.


The net neutrality incident garnered attention from many in Washington and beyond. One FCC commissioner decried the process as “rotten,” Wall Street Journal reporters carried out a special investigation, a subcommittee of the U.S. Senate issued a staff report on “abuses” of the rulemaking process, the Government Accountability Office began a study at the request of legislators, the New York state attorney general issued a report about this “massive fraud,” and Last Week Tonight With John Oliver covered it closely. This high-level attention could make you think that this happens all the time and that it’s ruining our rulemaking system, as a Future Tense article argued.

Contrary to some of the rhetoric, our report finds that incidents like this are rare and amenable to targeted responses. It’s also important to note that agencies do not treat the public comment process like a vote. Rather than simply tallying up the yeses and the noes, agencies are supposed to examine comments for substantive feedback on their proposals. Ten million identical bot comments can be easily de-duped using simple software.
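To see why mass-duplicate comments pose less of a threat than the headlines suggest, consider how little code de-duplication actually takes. The sketch below is purely illustrative (the sample comments and the normalization rules are our assumptions, not any agency’s actual system): it collapses trivially varied copies of the same comment and keeps one representative of each.

```python
from collections import Counter
import re

def normalize(comment: str) -> str:
    """Lowercase and collapse whitespace so near-identical copies match."""
    return re.sub(r"\s+", " ", comment.strip().lower())

def dedupe(comments):
    """Return one representative per distinct comment, plus submission counts."""
    counts = Counter(normalize(c) for c in comments)
    seen, unique = set(), []
    for c in comments:
        key = normalize(c)
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique, counts

# Hypothetical docket: two copies of a form comment and one substantive one.
comments = [
    "Repeal the rule.",
    "repeal   the rule.",
    "I support this proposal because of its effect on rural broadband.",
]
unique, counts = dedupe(comments)
```

A real system would use fuzzier matching (shingling, edit distance) to catch bot comments with small randomized variations, but the basic point stands: identical submissions are cheap to collapse.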


Nevertheless, to date, computer-generated comments don’t add much: they have ranged from spammy junk mail to large volumes of more realistic but mostly non-substantive comments submitted under fake names. If that were the limit of what we could expect from computer-generated comments, it would be reasonable to try to screen them out as unnecessary flotsam in the regulatory process.

But we can think of plenty of societally beneficial comments that software could generate. An automated system could gather up typos, broken links, and incorrect legal citations in proposed rules to send back to the agency. This kind of bot would help catch human errors and help make regulations more accurate. Language processing tools could analyze a proposed rule to determine its subject matter area, then crawl through related academic research to find and submit relevant scientific studies to the agency. This could help improve the government’s analytical basis for its policy choices.
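As a rough illustration of the error-catching bot described above, here is a minimal sketch (our own, hypothetical, and not any existing tool) that scans a proposed rule’s text for URLs and flags ones that are structurally malformed. A real bot would also issue HTTP requests to detect dead links and would check legal citations against a citation database.

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def find_suspect_links(rule_text: str):
    """Extract URLs from a proposed rule and flag malformed ones.

    A URL is flagged if it lacks a plausible host (no dot in the
    network location). Trailing punctuation is stripped first.
    """
    suspects = []
    for url in URL_RE.findall(rule_text):
        parsed = urlparse(url.rstrip(".,);"))
        if not parsed.netloc or "." not in parsed.netloc:
            suspects.append(url)
    return suspects

# Hypothetical excerpt from a proposed rule.
text = ("See https://www.regulations.gov/docket and the broken "
        "https://invalidhost reference.")
flagged = find_suspect_links(text)
```

The same scan-and-report pattern generalizes to typos and citation checks: extract candidate strings, validate each against a reference source, and bundle the failures into a comment back to the agency.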


Some people might want to opt in to a bot that could identify rules that they might care about. This could work like a credit card fraud alert, which lets you know about dodgy-seeming purchases. Instead of alerting you to confirm whether you purchased gas in an unusual place, this system could alert you to proposed rules that are relevant to your life.


For example, let’s say you enjoy birding in national parks, and sometimes you post photos to social media with geotags. You could opt into a “RegScan” app that could use your social media information to infer that you might care about a proposed rule that would limit the hours at one of those parks. If you have information about the public value of being able to observe particular birds at a specific time of day, you could share that with the government in a comment. The government might have missed this consequence of limiting park hours, so your comment could shed new light on an important park use. Because it could help the government do a better job of balancing competing considerations, this kind of comment would be more informative and useful than an up-or-down vote.


Apart from the good these apps could do in any particular rulemaking proceeding, they could also help remedy the collective action challenges of getting people involved in rulemaking. People are busy, and the rulemaking process is complex. In practice, this means that most people are probably unaware of rulemaking, the public comment process, or how to participate in it. We should support anything that helps people participate in meaningful ways. And when people participate in the process by adding unique information and perspectives, that can help improve the final regulation before it goes out the door. Agencies pay special attention to comments with this kind of substantive value, which can’t just be filtered out as duplicates. While bots have a long way to go, if you squint a bit, it’s possible to see real potential.

On Thursday, the Administrative Conference voted on recommendations about how federal agencies should approach computer-generated and other forms of public comments. We think the Conference took the right approach by leaving a wide lane open for experimentation with computer-generated comments. While we might not have seen killer apps for public commenting yet, there’s every reason to think they could be on their way.

This essay expresses the authors’ views only and does not necessarily reflect those of the Administrative Conference, its members, or its staff.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
