Is it possible that Stuxnet, an exceptionally sophisticated piece of malware targeting industrial equipment, could have been thwarted — at least partially — by software-free, hardwired vibration sensors?
I recently put that question to Andy Bochman, senior grid strategist for national and homeland security at Idaho National Laboratory (INL). I had just read his recent article, “Internet Insecurity,” which is the current cover story in the Harvard Business Review. The article recommends, among other things, adopting analog and low-tech systems, not as a replacement for digital ones, but as a barricade when hackers compromise vital systems.
“Your intuition is right on,” Bochman said, referring to the Stuxnet question. “That’s the crux of the whole paper.”
I breathed a sigh of relief, because a fundamental understanding of cybersecurity in general, and Stuxnet in particular, can be elusive. And I had never come across anyone stating that a low-tech cyber-defense strategy could mitigate one of the malware’s central attack vectors.
Why Stuxnet Still Matters
The Stuxnet example continues to serve as a touchstone for advanced IoT-based cyberattacks. In essence, Stuxnet was a piece of malware reportedly developed circa 2005 by the U.S. and Israeli governments to sabotage Iran’s nuclear program, and it was deployed about five years later. The worm made its way onto a closed network within the Natanz nuclear facility in Iran, likely via a USB stick. From there, it sabotaged code on the programmable logic controllers used to control the spin of the centrifuges. The malware would briefly drive the centrifuges out of their normal operating speeds and then return them to normal behavior for weeks at a time. The cycle repeated until approximately one-fifth of the centrifuges were destroyed.
Some eight years after Stuxnet hit, the malware continues to hold several lessons for industrial organizations writ large. One, nation-states continue to ramp up increasingly sophisticated cyberattacks on enterprise, industrial and rival-government targets, including organizations that manage critical infrastructure. Last year’s WannaCry (reportedly developed by North Korea) and NotPetya (allegedly created by Russia) cyberattacks are examples of that.
Recently, a piece of malware known as Triton or Trisis attacked a petrochemical plant in Saudi Arabia. The malware was designed to hijack the company’s operations and cause an explosion. The Washington Post cited the attack as an example of “malware that can kill.” According to The New York Times, the attack failed only because of an error in the code. Attacks like Triton and Stuxnet remind us that, in the 21st century, malware can cause physical damage to equipment and, in some cases, endanger human safety. Such malware also can wreak havoc on organizations that thought they had a strong cyber defense and had “air-gapped” their critical networks from the outside world.
And yet, Bochman is different from many cybersecurity experts in that he isn’t primarily focused on what he calls “cyber hygiene,” which spans everything from cybersecurity staffing and services to employee education, maintaining inventories of connected products, and the use of firewalls, honeypots and intrusion-detection systems. While all of these items have value, each has shortcomings, leaving organizations with the challenge of “trying to figure out what they should spend money on for biggest bang for the buck,” Bochman said. “Just put yourself in the shoes of the chief financial officer, whose job is to understand whether what the chief security officer is asking for is the best stuff.” Regulators such as public utility commissions, which oversee how electric utilities spend their money, are in a similar position. “It’s really tough,” Bochman acknowledged. “That’s all a part of cyber hygiene, which applies broadly across the enterprise.”
Engineering Old-School Cyber Defenses
Most organizations respond to abstract-seeming cyber threats by doing more of the same — or doing more of the same with a slightly bigger budget each year.
While acknowledging the importance of good cyber hygiene, INL recommends a different approach, which it terms the “consequence-driven, cyber-informed engineering” (CCE) methodology.
The first step of CCE is to “identify certain parts of an operation that really cannot be allowed to fail for any reason,” Bochman said. As the article puts it, this task includes “identifying functions or processes whose failure would be so damaging that it would threaten the company’s very survival.” Examples of attacks on such “crown jewel” processes could include the sabotage of an electric utility’s transformers, grinding distribution to a halt, or an attack on an oil refinery or chemical facility that triggers an explosion whose aftermath could injure or kill hundreds or thousands of people. This step, and indeed the entire process, is performed under the guidance of a CCE master, who could be an individual from INL or, in the future, a trained individual from an engineering service firm.
The next step is to create maps of the organization’s digital terrain, including all the people and processes (and third parties) that touch the critical operation, along with all the hardware, software and communications technologies they use.
The phase that follows is identifying the most likely attack strategies to breach the crown jewel processes, ranking them by difficulty.
The final step, risk and mitigation, differs from other cybersecurity strategies in its embrace of analog technologies and basic engineering principles. “The departure is that, in some cases, for very low dollars, and in some cases for no dollars, you can protect yourself in a way you probably hadn’t really thought of before,” Bochman said. Examples include an analog vibration sensor configured to trigger a protective shutdown when a piece of sensitive machinery begins behaving in a way that would damage or destroy it, as the nuclear centrifuges did in the Stuxnet example. Another strategy could be to keep a backup system that can take over in the event of a cyberattack on critical systems. Many such strategies may look, on paper, like technological steps backward: curbing the use of connected technology and enlisting trusted humans, rather than automation, to manage essential functions.
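The logic behind the vibration-sensor defense can be sketched in a few lines. This is a toy model, not INL’s design or a real safety system: the threshold, the drive frequencies and the “physics” here are invented for illustration. What it does show is the key property Bochman describes: the interlock reads only the physical signal, with no network interface and no software-writable setpoint, so malware that fully controls the PLC still cannot talk it out of tripping.

```python
# Toy model of a hardwired analog vibration trip. All numbers are
# hypothetical; the point is that the interlock never consults the
# (potentially compromised) controller, only the measured signal.

TRIP_THRESHOLD = 10.0  # hypothetical vibration limit, arbitrary units


class AnalogVibrationInterlock:
    """Latches into a tripped state once vibration exceeds a fixed threshold."""

    def __init__(self, threshold=TRIP_THRESHOLD):
        self.threshold = threshold
        self.tripped = False

    def sample(self, vibration):
        # No remote configuration path: the comparison is against a
        # fixed threshold, mimicking a hardwired analog circuit.
        if vibration > self.threshold:
            self.tripped = True  # latch: power stays cut until manual reset
        return self.tripped


def vibration_for_speed(hz, nominal_hz=1064.0):
    # Crude stand-in physics: vibration grows with deviation from the
    # machine's nominal operating speed.
    return abs(hz - nominal_hz) / nominal_hz * 100.0


if __name__ == "__main__":
    interlock = AnalogVibrationInterlock()
    # A Stuxnet-like command sequence: mostly normal speeds, with one
    # brief destructive excursion hidden among them.
    commanded_hz = [1064.0, 1064.0, 1410.0, 1064.0, 1064.0]
    for hz in commanded_hz:
        if interlock.sample(vibration_for_speed(hz)):
            print(f"trip at {hz} Hz: power cut before damage accumulates")
            break
```

The latch matters: even if the malware immediately restores normal speeds to cover its tracks, the interlock stays tripped and a human has to inspect the machine before restarting it.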
Ultimately, one of the unique aspects of the INL system is its insistence that the key to enhanced cybersecurity could be the engineers, technicians and other technical staff who keep heavy machinery up and running. “If you told those operators and engineers that they possess the keys to better cyber-defend their most critical systems, that can create a really interesting dynamic,” Bochman said. “Many or maybe almost all of them never thought of that before.” While INL’s methodology also prioritizes buy-in from upper and middle management, it is unique in how it empowers machine operators and engineers. It makes such tech workers “hackers” in the early sense of the word: “working on a tech problem in a different, presumably more creative way than what’s outlined in an instruction manual,” as a 2014 New Yorker article put it. “In our first pilot, it was the people who are closest to some of the most important and sometimes dangerous processes who came up with the mitigation before the INL people had gotten to that point,” Bochman said.