The cyberattack that led to the near-poisoning of the water supply in Oldsmar, Florida, is at once the most terrifying and most reassuring type of hacking story—terrifying because of how severe the consequences almost were and reassuring because, ultimately, there were no consequences at all. On Monday, at a press conference, local officials announced that someone had infiltrated the computer systems that controlled Oldsmar’s water treatment plant and increased the level of lye, or sodium hydroxide, in the water by a factor of 100, a change that could have poisoned the city’s water supply had it actually been carried out. That’s the scary part, and it should, of course, stop you cold—that someone could conceivably poison your drinking water from anywhere in the world without leaving their laptop.
The good news is that no extra lye was ever added to the city’s drinking water. According to the sheriff, mayor, and city manager who spoke at the press conference, the change in chemical levels was immediately detected and reversed by a city employee before it could go into effect. In fact, according to Pinellas County Sheriff Bob Gualtieri, a water treatment plant operator actually watched the intrusion occur in real time on Friday afternoon, when the perpetrator accessed the plant’s computer networks via the remote-access TeamViewer software and “took control of the mouse, directed it to the software that controls water treatment, worked inside it for three to five minutes and increased the amount of sodium hydroxide from 100 parts per million to 11,100 parts per million.” (It’s unclear exactly how the perpetrator got access to TeamViewer, whether through a software vulnerability, stolen credentials, or some other means, though TeamViewer has said that it doesn’t have “any indication” that its program was compromised.) The water plant operator was then able to immediately return the concentration to 100 parts per million. Other safeguards apparently in place at the time of the intrusion would have kept the change from taking effect for more than a day after it was entered, and, on top of that, the town now says it has disabled the system that allowed those levels to be changed remotely.
It can sometimes be tricky to discern exactly what lessons we should take from these sorts of close calls. They’re a reminder of just how dependent we are on cyber infrastructure for all sorts of daily services and critical functions, and just how vulnerable that infrastructure is to electronic manipulation and intrusion. But, at the same time, they’re also a reminder that many of our safety mechanisms and protections for critical infrastructure actually work pretty well.
“The public was never in danger,” Gualtieri said of the incident. That’s true in one sense—no one came close to actually drinking any extra lye—and also, of course, entirely untrue in another sense: All that stood between Oldsmar residents and dangerous chemical levels in their drinking water was a day or two and the careful observation of a diligent water treatment plant operator. In other words, depending on your perspective, you can look at the Oldsmar story and see a shocking failure to secure critical infrastructure and a frightening vulnerability in the public’s water supply, or you can see a rousing success story of fail-safe controls that prevented that vulnerability from being exploited.
Both of those things are true: Our critical infrastructure is profoundly vulnerable and also, to a large extent, relatively straightforward to protect. To be clear, I don’t mean that it is easy to protect computer systems from intrusions or prevent cybersecurity breaches—those are extremely difficult, even impossible, tasks. The computer networks for water treatment plants or power plants or any other type of critical infrastructure will never be perfectly secure. But just because it will always be possible to compromise those computers doesn’t mean that it has to be possible for those compromises to lead to severe physical or kinetic consequences. Certain types of changes to critical services should not be possible to make via remote access, as Oldsmar has discovered, or without manual, in-person approval and verification. Perhaps certain changes should simply be disallowed by a computer system entirely. (Why would a water treatment plant’s computers permit the level of lye to be set so high in the first place?)
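The kind of safeguard described above can be made concrete in a few lines of code. The sketch below is purely illustrative, with hypothetical names and limits, and does not reflect Oldsmar’s actual system: it shows how a control system might reject setpoints outside a hard safe range, cap the size of any remotely issued change, and require in-person sign-off before a remote change takes effect.

```python
# Hypothetical sketch of layered setpoint safeguards for a treatment plant.
# All names and numeric limits here are illustrative assumptions, not the
# configuration of any real facility.

SAFE_RANGE_PPM = (50, 200)      # assumed hard safe band for sodium hydroxide
MAX_REMOTE_CHANGE_PPM = 10      # assumed cap on any remotely issued change


def validate_setpoint(current_ppm: float, requested_ppm: float,
                      remote: bool, approved_in_person: bool) -> bool:
    """Return True only if the requested change passes every safeguard."""
    lo, hi = SAFE_RANGE_PPM
    if not lo <= requested_ppm <= hi:
        return False            # hard limit: this value is never allowed
    if remote and abs(requested_ppm - current_ppm) > MAX_REMOTE_CHANGE_PPM:
        return False            # large swings require local access
    if remote and not approved_in_person:
        return False            # manual sign-off before a remote change applies
    return True


# The Oldsmar change (100 -> 11,100 ppm) fails the hard limit outright.
print(validate_setpoint(100, 11100, remote=True, approved_in_person=False))  # False
# A small, approved adjustment passes.
print(validate_setpoint(100, 105, remote=True, approved_in_person=True))     # True
```

The point of layering the checks is that an attacker who compromises the remote-access software still cannot push the system outside its physical safety envelope; the most dangerous values are simply not accepted, regardless of who is holding the mouse.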
We can’t detect or prevent every computer compromise, but we can monitor some of the key places where computer commands are translated into physical consequences—whether those consequences have to do with the amount of chemicals introduced to a water supply or the speed of centrifuges at a uranium enrichment facility. As long as people, not computers, are in charge of monitoring those changes and signing off on them, the Oldsmar incident suggests that we have a decent ability to mitigate some of the most devastating impacts of cyberattacks aimed at producing physical damage.
Inevitably, we won’t always get it right, and some compromises will be more sophisticated and harder to stop than the one that played out in Oldsmar. But for now, we can learn from both Oldsmar’s failure and its success: take the story as a wake-up call about the importance of securing cyber-physical systems with manual safeguards and controls, but also as a triumph of those safeguards, which effectively confined the consequences of this intrusion to cyberspace.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.