The Biden Administration Needs to Do Something About Tesla

The electric carmaker’s approach to autonomous vehicles is far too risky.

A Tesla Model S P85D. (Photo: Myung J. Chun/Los Angeles Times via Getty Images)

This article is part of the Future Agenda, a series from Future Tense in which experts suggest specific, forward-looking actions the new Biden administration should implement.


In October, Tesla offered some of its customers an upgrade to its “Autopilot” driver-assistance system called “Full Self-Driving.” Anyone familiar with how Tesla cars work knows that “Autopilot” isn’t really “autopilot,” and “Full Self-Driving” isn’t “full” either. For now, the feature allows a car to stay within its lane, brake automatically in an emergency, make turns, and respond to traffic signals on its own. But the company warns drivers to “not become complacent” because the vehicle “may do the wrong thing at the worst time.” Indeed, within days of FSD’s launch, a YouTube video showed a Tesla trying to drive itself into a parked car. Tesla called FSD “beta” to underscore that it was a work in progress.

Safety and automotive leaders condemned Tesla for exposing its customers—and everyone else who shares the roads—to unnecessary risk. PAVE, a nonprofit providing education about autonomous vehicles, blasted the company for “using untrained consumers to validate beta-level software on public roads,” calling this “dangerous and inconsistent with existing guidance and industry norms.” An association representing truckers warned that “while [FSD] may be a fun experiment for Tesla’s customers, public roads are our members’ workplace.”


Faced with an obvious safety hazard and seemingly false advertising, you might expect federal officials to step in. Nope. Instead, the National Highway Traffic Safety Administration promised merely to “monitor the new technology closely.” With an apparent regulatory green light, Tesla CEO Elon Musk claimed that FSD would be available nationwide by the end of this year.

This is not the first time NHTSA has shrugged off an urgent safety problem. The Trump administration’s regulatory hand-sitting has certainly included lax oversight of automotive technology. Earlier this year, the Government Accountability Office rebuked NHTSA for failing to revise vehicle-crash ratings to account for the growing risk posed to vulnerable pedestrians (something European regulators have done for years). The Trump administration never even bothered to install a congressionally approved NHTSA administrator.


Tesla has capitalized on NHTSA’s inaction. From giving its software misleading names to ignoring recommendations from crash investigators to declining to install critical driver-monitoring technology, the company has shown a dispiriting willingness to cut corners. It’s not just industry watchdogs that are expressing alarm; Tesla’s actions could undermine public confidence in autonomous vehicle technology writ large—especially if Autopilot and FSD lead to more crashes. (Neither Tesla nor NHTSA responded to requests for comment.)

With the automotive industry investing billions in the development of autonomous systems, Tesla’s risky first-mover behavior sets a dangerous precedent. By moving quickly to rein in Tesla, the Biden administration can protect today’s road users while paving the way for safe development of autonomous technology throughout the industry.


A good place to start would be helping consumers understand what automotive tech can and cannot do. Today’s “advanced driving-assistance systems” include functionality like lane-keep assistance (keeping a car between the lines on the road) and automatic emergency braking, which reduce the need for driver engagement in certain road conditions. Often seen as a step toward fully autonomous vehicles, ADAS nevertheless requires drivers to keep their eyes on the road and be ready to turn the steering wheel or apply the brakes if a problem arises.

That kind of vigilance doesn’t jibe with a term like “Autopilot,” which suggests a vehicle that can operate independently. Indeed, European regulators bluntly concluded that “Tesla’s system name Autopilot is inappropriate as it suggests full automation.” Musk disagrees; he recently called the idea of changing it “idiotic.” As misleading as “Autopilot” may be, the term “Full Self-Driving” seems even worse. “The real name should be anything but that,” said Duke engineering professor Missy Cummings, who has studied autonomous technologies extensively.


Other carmakers have adopted their own flashy names for their ADAS systems, including Cadillac’s Super Cruise and Ford’s Co-Pilot 360. Consumers are already confused. A study from AAA found that drivers assumed higher functionality in a fictitious ADAS product called “AutonoDrive” than in an identical one named “DriveAssist”; in the real world, names like “Autopilot” and “Full Self-Driving” could lead drivers to overestimate their systems’ capabilities and pay less attention while in the driver’s seat. But Trump officials have shown no interest in clarifying things.

There is one obvious way to address this come January: a Federal Trade Commission investigation that explores whether Autopilot and FSD constitute deceptive advertising. A precedent exists in Germany, where a court in Munich recently ruled that Autopilot’s marketing illegally exaggerated its functionality. NHTSA could also help by adopting standard definitions and minimum standards for ADAS features like lane-departure warning and automatic emergency steering, with regular updates that incorporate technological progress. Mark Rosekind, who served as NHTSA administrator under President Barack Obama, believes that government-backed definitions are critical: “When you buy a car with X, everyone knows what X is and that it’s not Y or Z. That boosts safety.”

Clarifying naming conventions and minimum standards is the easy part. A thornier problem is managing the actual safety risk posed by Autopilot and FSD.

Already, two Florida drivers have been killed in separate crashes when their Teslas drove underneath turning tractor-trailers. The National Transportation Safety Board investigated these Tesla crashes (among others) and placed much of the blame on Autopilot, which failed to detect the obstacle ahead or ensure that the driver maintained focus on the road. Tesla has largely ignored recommendations from NTSB, which lacks enforcement authority. (In 2018, Musk literally hung up on the NTSB chairman.) NHTSA, for its part, has shrugged off NTSB’s recommendation that it launch its own investigation into Autopilot. “The question is how many more people are going to die before NHTSA agrees there is a defect in how Autopilot is set up,” says Cummings. NTSB seems to concur, concluding in a recent report that NHTSA’s “approach to the oversight of automated vehicles is misguided, because it essentially relies on waiting for problems to occur rather than addressing safety issues proactively.”


In a contentious hearing this February, NTSB members asked why Tesla had not designed Autopilot so that it could be used only in its intended highway environment (its so-called operational design domain, or ODD). Instead, Tesla has filled its owners manual with instructions about the conditions where Autopilot should or should not be activated while leaving the final decision to the driver. That approach seems flawed; Tesla has not shared information about how frequently drivers activate Autopilot outside the ODD, but it was a contributing factor in the two Florida crashes involving tractor-trailers (both occurred on highways with cross traffic, where Autopilot is not intended for use). “I don’t know how many people are operating these cars outside the ODD,” says Cummings. “It’s a matter of time before there is another catastrophic accident.” Again, NHTSA has shown no desire to investigate.

NTSB has also questioned the efficacy of Tesla’s driver-monitoring system, which uses a driver’s grip on the steering wheel—known as a torque-monitoring system—as a proxy for her attention to the road. Such systems are easy to cheat, says Colin Barnden, an automotive analyst at Semicast Research. “I could fall asleep with one hand on the wheel. I could have one hand on the wheel and read a book or watch videos on my cellphone.” Indeed, videos have been posted online of Tesla drivers playing cards, taking a nap, or even leaving the driver’s seat entirely while Autopilot is running. In at least one fatal Tesla crash, there was evidence that the driver had been playing video games while Autopilot was active.

Much more effective are eye-tracking systems that can monitor the angle of a driver’s head and the movement of her face. Musk has brushed aside eye-tracking systems as being ineffective and expensive, but Consumer Reports found them to be far superior to torque-monitoring systems in a recent comparison.

The overall picture of Tesla’s attitude toward autonomous driving features is deeply troubling. First with Autopilot and now with FSD, Tesla has introduced ADAS packages with confusing names and inadequate driver monitoring, designed under a questionable assumption that vehicle manuals dictate driver behavior. Tesla seems to repeatedly sacrifice safety in order to gain a competitive edge. After all, the company can entice customers by exaggerating its systems’ functionality and eschewing Autopilot restrictions. And by relying on some of those customers to beta-test FSD, Tesla can collect valuable data from their trips, while other companies rely on trained safety drivers to test such features.


Although FTC investigations into Autopilot and FSD would help, the bulk of the responsibility for reining in Tesla falls on NHTSA. Under President Biden, the agency must take a more proactive, deliberate role in regulating automotive technologies.

Most importantly, NHTSA should finally launch an investigation into Autopilot and FSD. The central question is whether the pattern of driver misuse of those technologies represents a defect, which could prompt a recall. The principle of “predictable abuse”—outlined in NHTSA guidance issued in 2016—provides a framework for such an investigation. Four years ago, Tesla could reasonably argue that warnings within its manual were enough to ensure Autopilot’s safe use, but that was before the system was implicated in numerous crashes, and before so many online videos showed Tesla drivers misusing it. “NHTSA has the authority to weigh in and pursue a remedy such as a recall on anything that causes an unreasonable risk,” says Paul Hemmersbaugh, former NHTSA chief counsel.

But a Biden administration shouldn’t stop with investigations into Tesla’s ADAS features. Historically, the United States has relied on automakers to self-certify their vehicles; a car or truck is legal on public roads as long as it complies with the extensive Federal Motor Vehicle Safety Standards. Only if a pattern of problems emerges will NHTSA launch an investigation. But those standards include no requirements about ADAS like Autopilot and FSD, leaving automakers to do whatever they like. Another example: Unlike many other automakers, Tesla has refused to incorporate laser-based sensor systems known as lidar into its ADAS packages, despite the technology’s potential safety advantages.

Again, European regulators could serve as a model. “In Europe the whole regulatory system works completely differently from the United States,” says Tom Gasser, an attorney at BASt, the German government’s transportation research institute. “We don’t have self-certification.” Before new automotive software can be shared with the public, an automaker must receive “type approval” from a European government, which confirms adherence to technical and safety requirements. Despite the fanfare of unveiling FSD in the United States, Gasser notes that Tesla hasn’t requested European approval. “Nobody I know believes Tesla will do that here,” he says—likely because the company fears rejection.


Requiring type approval for automotive technology would be a paradigm shift in the United States, even though it’s already the norm for another transportation mode: aviation. If an airplane manufacturer wants to adjust its software or hardware designs, it first needs the green light from the Federal Aviation Administration. Given the catastrophic consequences if airplane components don’t sync perfectly, that kind of permission-not-forgiveness regulatory approach makes sense.

As automobiles rely more on computers and sensors—and less on the human driver—regulatory preapproval feels more urgent. In the final months of the Obama administration, NHTSA issued an extensive Federal Automated Vehicles Policy report that cited premarket approval as a strategy worth exploring. But adopting it would be a heavy lift, requiring congressional support and significant new resources. “We already ask the agency to do far too much with far too few people,” says Hemmersbaugh. Automotive safety may seem like a nonpartisan issue, but Republicans’ aversion to regulation presents an obstacle.

Tesla has placed the rest of the automotive industry in something of a bind. Many carmakers’ executives view Tesla’s behavior as excessively risky, but they are wary of openly criticizing a company that has built a brand about being more innovative than the industry’s incumbents. John Bozzella, CEO of the Alliance for Automotive Innovation, a prominent industry association, said in a carefully worded statement that “automakers have an obligation to talk about not only the benefits of AVs but their limitations and how they should be used.” (Tesla is not a member of the group.)


When a prototype autonomous vehicle from Uber fatally struck Elaine Herzberg in Arizona in 2018, it cast a shadow over the entire industry. Some automotive executives worry that deaths involving FSD and Tesla could have a similar effect—especially if a person outside the vehicle is killed (in China an Autopilot-enabled Tesla hit a street sweeper truck in 2016, but only the Tesla driver died). That could spur calls on Capitol Hill for preapproval of automotive technologies, which automakers dread. “The current self-certification model should be preserved, as this framework has worked well in the U.S.,” said Bozzella.

That may be true, but we’re just starting to enter the brave new world of computer-assisted driving. Software and sensors are only becoming more important to automotive operations—and the federal government has done little to manage their development. Tesla is the first to push the regulatory envelope at the expense of safety, but unless the Biden administration intervenes, it’s unlikely to be the last. By bringing the hammer down now, federal officials can both save lives and ensure that transparency and safety are essential features of AV development in the critical years to come.
