On Monday, the National Highway Traffic Safety Administration, or NHTSA, released a new list of “best practices” for cybersecurity in vehicles, designed to prod carmakers toward securing cars from hackers.
Those include corporate changes, such as hiring high-level cybersecurity managers and embracing greater openness about flaws and vulnerabilities. More interestingly, the guide includes some technical suggestions for Detroit, such as:
- Everything the car does should be logged so that the method and consequences of a breach are recorded.
- A key or password obtained from open access to one vehicle’s computer should not provide access to multiple vehicles.
- Limit or eliminate post-factory access to the software in “engine control units,” or ECUs. The report says that “physically hiding connectors, traces, or pins intended for developer debugging access should not be considered a sufficient form of protection.”
Does that all sound obvious? It hasn’t been to carmakers. In August, security researchers were able to use a single Volkswagen to extract a cryptographic key that could unlock millions of its peers.
What the guide does not say is why anyone would want to hack a car. It’s true that cars are now being stolen with computers, not coat hangers, though grand theft auto (down 62 percent in the last two decades) is a much less popular hustle than it used to be and will be less appealing when every car requires GPS to function.
Instead, the primary hacking threat is perceived to be to our physical safety. The quintessential car-hack of our times occurred in 2015, when a pair of St. Louis cybersecurity researchers remotely shut down the transmission of a Jeep Cherokee as a Wired reporter drove it down Interstate 64. The vulnerability prompted a recall of 1.4 million cars and gets brought up every time anyone talks about the electronic architecture of new vehicles. The stunt set the tone for the way we think about car-hacking: as a malicious and dangerous act.
One section of the NHTSA “Best Practices” report describes, for example, what might happen if spoofed messages—that is, communications that appear to be legitimate but actually come from a third party—reached a vehicle’s engine control unit. A spoofed message could inappropriately trigger a car’s brakes, for example, or cause a traffic control system to think a moving car was stationary. External devices could be used “as proxy to influence the safety-critical system behavior on vehicles.”
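Spoofing works because the message formats used on many in-vehicle networks carry no proof of who sent them: any node on the bus can transmit a frame that looks like it came from the brake controller. Here is a minimal sketch, in Python, of the standard countermeasure—appending a message authentication code (MAC) that a forger without the shared key cannot produce. The frame layout, key, and IDs are illustrative assumptions, not details from the NHTSA report:

```python
import hmac
import hashlib

# Hypothetical example: a CAN-style frame is just an ID plus a few data
# bytes, with no sender authentication, so any device on the bus can
# forge one. A keyed MAC lets the receiving ECU reject forged frames.

SHARED_KEY = b"illustrative-key-not-from-the-report"

def sign_frame(arb_id: int, data: bytes) -> bytes:
    """Compute a truncated HMAC-SHA256 tag over the frame contents."""
    msg = arb_id.to_bytes(2, "big") + data
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:8]

def verify_frame(arb_id: int, data: bytes, tag: bytes) -> bool:
    """Accept the frame only if its tag matches; constant-time compare."""
    return hmac.compare_digest(sign_frame(arb_id, data), tag)

# A legitimate "brake" command from a node holding the key carries a
# valid tag and is accepted.
tag = sign_frame(0x220, b"\x01")
print(verify_frame(0x220, b"\x01", tag))          # True

# A spoofed frame from an attacker without the key is rejected.
print(verify_frame(0x220, b"\x01", b"\x00" * 8))  # False
```

The same idea—cryptographically binding messages to an authorized sender—is what the report’s warnings about spoofed ECU traffic implicitly call for.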
Those risks are real. But to some extent they are the futuristic equivalent of cutting a car’s brakes, or throwing rocks from a highway overpass—bad news, potentially deadly, but hard to monetize. As a colleague who writes about cybersecurity put it to me, “There’s no money in killing people.” (For the most part, anyway.)
That doesn’t mean carmakers shouldn’t secure vehicles from those attacks. But the more likely threat is to financial security and identity. After all, if your car is loaded up with your credit card information and Social Security number, it’s just one more computer in your life.