Future Tense

Too Often, We Don’t Regulate New Technologies Until Somebody Dies

An Uber self-driving car on the streets of San Francisco on March 28, 2017. Justin Sullivan/Getty Images

I rode in my first self-driving car in the summer of 1997, as part of a demonstration of the technology in “the real world” on a stretch of Interstate 15 in San Diego. The organizers took great pains to control the highway’s separate HOV lanes, installing barriers to keep all other cars (and pedestrians) from interfering. Everyone involved knew a significant amount of work remained to get from that demonstration to self-driving cars safely navigating normal city streets.

In the 20 years since, I’ve continued to study automated vehicles, particularly their history, and the technology has continued to develop. Carnegie Mellon engineers did groundbreaking research on vehicle-scale robotics, and the U.S. military’s Defense Advanced Research Projects Agency funded competitions that challenged inventors to create automated cars capable of navigating desolate stretches of desert. But it is a reality of technology that there is only so much testing one can do in a lab, through computer modeling, or in carefully controlled spaces. The real world presents complexities, including unpredictable factors such as human beings, that can be addressed only by releasing technologies into the wild. We never quite know how a technology will work until we put it in the hands of ordinary people and see what happens. Ultimately, average people serve as the guinea pigs in the final round of testing for most new technologies.

I again became a self-driving car guinea pig about a year ago, when Uber, Waymo, and others began putting dozens of their autonomous vehicles in my neighborhood and around my university. Until this week, I was used to seeing up to a half-dozen Uber SUVs trailing one another on my way to school, and to a few other automated-vehicle companies using my neighborhood as a technical challenge precisely because it has no sidewalks or lane markings. Every day, as I go to and from work, I drive past the exact spot in Tempe, Arizona, where Elaine Herzberg was struck and killed by a self-driving Uber car on Sunday night.

It is not at all uncommon for technologies to be tested on us unwittingly. In fact, it is something of an American tradition. We pride ourselves on being a people who innovate and take risks, and in many ways, whether we like it or not, we are all affected by that mindset. Our government’s regulatory framework is largely hands-off. Innovation is the goose that lays the golden eggs upon which our economy, quality of living, and happiness are built, and there is a great fear in the halls of government that creating guidance for our inventors will choke off their creativity. Thus, for the most part, technologies are innocent until proven unsafe. That is, we rarely regulate until someone dies.

The one field in which we do have clearly defined limits is medicine. Our country is wary of the immediate and possibly deadly effects that untested pharmaceuticals could have on us, and as such, our government requires that a medicine be proven safe (or at least beneficial enough to warrant its side effects) before it may be used on ordinary patients.

As long as that process is followed, then, medicines are not unwittingly tested on just anybody. Medical ethicists have successfully argued that people should give informed consent before they are experimented upon, so we have created a process by which participants are told what they are getting into before they agree to take part. To limit the coercive power of the experimenters, we add further safeguards, such as generally not allowing them to test on children or inmates. But most technological development does not require informed consent from those being experimented upon.

The tragedy that befell Elaine Herzberg on Sunday night should make us pause and ask whether there are more areas of technology where those being experimented upon should give informed consent. Uber had not reached out to the citizens of Arizona to let them know about the technology it was testing. It had not engaged us in a discussion about the benefits, let alone the risks. Many of us were intrigued by the daily sight of these vehicles with the strange arrays of sensors on their roofs. But few of us, even those who study the technology, had access to Uber’s rationale for the experiment we were part of. We were not given the chance to know how or why we were being experimented upon, or any opportunity to give feedback. Elaine Herzberg was ultimately a casualty of progress, but it could have happened to any of us.