The modern automobile can be traced back more than 100 years to 1886, when Karl Benz created the Benz Patent-Motorwagen. It consisted of a small open-air carriage made of wood, supported by delicate steel-spoke wheels, and it allowed drivers to travel at the hair-raising speed of slightly more than 10 mph. It’s easy to chuckle at that today, when our cars are made from a variety of composite materials that were likely developed by aerospace engineers and can easily reach speeds in excess of 100 mph. But the primary goal of the automobile hasn’t changed: to move us from point A to point B. A second goal is to provide mobility in a way that does not harm people or damage property. Simply stated, we want to drive safely.
Human-machine interface researchers have long recognized that mobility and safety can only be achieved through a partnership between driver and vehicle. The partnership can be thought of as a dance in which each partner must complete at least some tasks—you tap the brakes when you see a pedestrian and the car slows down.
Traditionally, achieving these goals of safety and mobility has largely been your responsibility as a driver. You make sure that you are well-rested and sober, you check the fuel level so you don’t run out of gas, and you make sure that the car is operational so it doesn’t break down. While driving, you provide all the vehicle control inputs: You decide the steering wheel, accelerator pedal, and brake pedal positions. And of course you have to stay constantly aware of the vehicle’s operation and the driving situation, and coordinate your responses accordingly. Most drivers are great dance partners because they constantly monitor and attend to what’s going on.
The automobile, on the other hand, has long been a terrible dance partner. Historically, the car hasn’t been able to monitor itself (besides, say, offering an infuriatingly vague “check engine” light), the driver, or the environment—and it certainly hasn’t been able to use this information in a partnership with you to achieve greater mobility and safety. It’s like your little cousin standing on your feet while you boogie: You’re the one doing all the work, but it still looks like a dance.
Regardless of the awkward imbalance, we nevertheless manage to make this partnership work. The Federal Highway Administration estimated that Americans alone drove a staggering 3.1 trillion miles in 2015, which, by any standard, suggests a highly mobile society. But what about the other goal: How safe were we? Well, that depends on your perspective. As a researcher in the field of transportation safety, I would say that we sacrificed safety for mobility. According to the National Highway Traffic Safety Administration, in 2015 in the U.S. approximately 35,000 people died and more than 2.4 million people were injured in automobile crashes.
So we need a different partnership model, one that can help us maintain mobility and also improve safety. Lots of people think that self-driving cars are the answer: The driver hops in a vehicle, indicates a destination, and then does, well … whatever he or she likes, because the automobile completes all necessary tasks to achieve mobility while also monitoring and attending to safety. Suddenly, the situation we’ve been in for more than a century is flipped, and drivers now represent the little cousin standing on the feet of the automobile.
The research community is currently debating how autonomous vehicles will influence mobility. Some researchers speculate that our mobility might be increased, particularly for certain groups, including seniors, who can significantly extend their driving years, while others suggest that we will take the same number of trips regardless of whether our cars are automated. But there are great hopes that safety will be improved significantly by minimizing the driver’s role. And a good thing, too: Research suggests that the driver is implicated in 94 percent of crashes.
But flipping that situation will take some time. Over the next several decades, until autonomous vehicles have achieved significant market penetration, we’ll see a mixed model in which drivers and their vehicles share mobility and safety responsibilities more equitably. This partnership will be cultivated by continuous technological advances that allow vehicles to gather ever larger amounts of information about a driver, to monitor and assess both the driver and the surrounding world, and to act on that information in the best interest of the partnership.
Take, for example, the Mercedes-Benz Attention Assist system, which monitors an array of driving parameters. When it detects drowsiness, it warns the driver of the situation. In another example, my colleagues in the human physiology domain have successfully used real-time facial temperature profiles to determine whether a driver is stressed due to cognitively or physically demanding tasks. Then, the vehicle can provide a response to calm the driver. That light on your dash that used to indicate a problem with your vehicle now indicates when there is a problem with you. If your car calculates that your stress level is too high, it can turn on some soothing Beethoven and change the interior lights to a relaxing blue hue. And that’s just using temperature information. I look forward to the day when vehicles care as much about my safety as I do—that’s a true partnership, one that will drive down fatality and serious injury rates.
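The stress-responsive cabin described above can be thought of as a simple threshold rule: estimate a stress score from a physiological signal, and trigger a calming intervention when the score gets too high. The sketch below is purely illustrative; the temperature-to-stress mapping, the two-degree scaling, the 0.8 cutoff, and the response options are assumptions for the sake of the example, not details of any real system.

```python
# Illustrative sketch of a threshold-based driver-state monitor.
# All thresholds and mappings here are hypothetical.

STRESS_THRESHOLD = 0.8  # assumed cutoff on a normalized 0-to-1 stress score


def stress_score(face_temp_c, baseline_temp_c):
    """Map a facial-temperature reading to a crude 0-to-1 stress score.

    Assumes stress correlates with temperature rise above the driver's
    own baseline; real systems would fuse many more signals.
    """
    delta = face_temp_c - baseline_temp_c
    # Clamp to [0, 1]: 0 = at baseline, 1 = two degrees C or more above it.
    return max(0.0, min(1.0, delta / 2.0))


def choose_response(score, threshold=STRESS_THRESHOLD):
    """Pick a calming intervention when the score crosses the threshold."""
    if score >= threshold:
        return {"music": "soothing", "cabin_light": "blue"}
    return {"music": None, "cabin_light": None}


# A driver running 1.8 degrees C above baseline scores 0.9 and
# triggers the calming response; a driver near baseline does not.
print(choose_response(stress_score(35.8, 34.0)))
print(choose_response(stress_score(34.2, 34.0)))
```

The point of the threshold structure is that the intervention logic is decoupled from the sensing: as richer driver-state estimates become available, only the scoring function needs to change.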
This sounds wonderful, right? The tricky part is to create the opportunity for the partnership and convince humans to participate.
The vehicle-based technology landscape is littered with devices and systems that failed because they didn’t account for how the human-vehicle partnership should work. Like any partnership, one between vehicle-based technologies and drivers can be fraught with significant challenges. What happens when there is conflict between a driver’s goal and a vehicle’s goal? How will a driver feel when the car overrides his or her decision to steer left because it thinks the best way to reduce the collision consequences is to steer in the opposite direction? What about situations in which drivers get confused by the basic operation of the technology?
My colleagues and I recently tested a system that could offer warnings about possible crashes long before a driver even saw the potential collision unfolding ahead. As an example, imagine you are driving in an urban area with many high-rise buildings that block your view of approaching traffic on an intersection side road. You have the green light, but the system presents a warning sound and crash icon indicating a vehicle is about to run its red light. Yet everything appears fine to you. Somehow, the approaching motorcycle stops at the last second. Our research indicated that in this case, many drivers felt the warning system provided a false alarm—after all, they didn’t see anything occur.
Most challenges associated with these partnerships can be traced back to poor communication between the partners and/or a lack of trust by drivers. It is easy to imagine that a system that provides a perceived false alarm could also instill a significant amount of mistrust in drivers. No one wants a car that cries wolf. Furthermore, researchers have found that drivers become less accepting of a system when these challenges are significant and unresolved. In fact, drivers’ dislike for these technologies can be so great that they simply turn the systems off, thus defeating any possible safety benefits.
So how can we convince drivers to see themselves as part of a partnership, rather than the car’s boss? That is an awfully big question. The current design theory is, of course, to create “usable” interfaces, ones that drivers instantly understand and that don’t elicit negative feelings. As an example, research into intersection collision warning systems, like the one described above, suggests the need to let drivers know that an event occurred but may not have been seen. However, no human-machine interface is perfect.
To complicate the partnership further, it is increasingly clear that the balancing point between the primary goals of mobility/safety and usable interfaces may not be static. For example, if a system determines a driver is easily distracted on a particular day, the system interface may need to provide warnings that are louder or more sustained, making them more difficult to ignore. And there are other factors, such as your level of patience, system functionality or malfunction, and environmental conditions like poor weather. There are also long-term changes, such as driver age, which may affect our ability to see interfaces, and new driving environments. All of these factors complicate our ability to create ever more usable human-machine interfaces.
The bad news is that there is no single best solution. The good news is that designers have a genuine desire to improve mobility and safety, and to do so while providing the most usable interfaces possible. As this partnership develops, a few dance partners (the human ones) will get their feet stepped on, and others will walk away entirely. Ultimately, however, vehicles will become better dance partners—and we will all benefit.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.