A cocktail of glaring gaps and irreproducible results in the field of industrial cybersecurity research is conspiring to leave critical infrastructure vulnerable to potentially “catastrophic attacks,” according to a systematic review of the computer science literature by cybersecurity researchers in the U.S. and Germany.
Led by security researcher Efrén López-Morales at Texas A&M University-Corpus Christi, an investigative team analyzed 133 research papers detailing industrial control system (ICS) cyberattack methods and defensive strategies published between 2007 and 2023. Collectively, the papers covered no fewer than 119 ways in which cyberattacks could sabotage an ICS, and 70 cyberdefense strategies for fighting them off.
Their analysis revealed, however, that many of the mooted cyberdefensive measures were built on sand: most had “little to no” research proving their effectiveness, said López-Morales. Moreover, the defensive research he and his team were able to find often provided no data repository (containing, for instance, the network intrusion test code used) that would have allowed other research teams to check or replicate the results by other routes, as adherence to rigorous scientific method would ordinarily demand.
As a result, the researchers say, critical infrastructure in essential businesses, spanning energy generation, manufacturing, water treatment, gas pipelines, and food production, remains at risk of debilitating cyberattacks. The news comes as U.S. energy utilities alone faced a 70% increase in cyber intrusions between January and August 2024, compared to the same period in 2023. Though none of the 1,162 attacks were crippling, experts fear it may be only a matter of time before one seriously disrupts the energy supply.
In that context, one salient risk the Texas A&M team identified is the number of unexplored vulnerabilities in many of the commercially available programmable logic controllers (PLCs) used in industrial control systems to automatically operate machinery. PLCs are small, ruggedized computers running a real-time operating system; they use a combination of software, logic hardware, and firmware to control the actuators, valves, and motors of what can sometimes be massive machines, based on inputs from timers and arrays of sensors measuring parameters like pressure, temperature, and vibration.
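The read-evaluate-write loop a PLC runs is often called a scan cycle. A minimal sketch, in Python for readability (real PLC logic is written in IEC 61131-3 languages such as ladder logic or structured text), with all tag names and setpoints invented for illustration:

```python
def scan_cycle(inputs, state):
    """One pass of illustrative PLC-style logic for a pump and a relief valve.

    All sensor names and setpoints here are hypothetical examples,
    not drawn from any real industrial system.
    """
    outputs = {}

    # Latch the pump on below the low-pressure setpoint, off above the high one
    # (hysteresis, so the motor does not chatter around a single threshold).
    if inputs["pressure_psi"] < 80.0:
        state["pump_on"] = True
    elif inputs["pressure_psi"] > 120.0:
        state["pump_on"] = False
    outputs["pump_motor"] = state["pump_on"]

    # Open the relief valve on overpressure or excessive vibration.
    outputs["relief_valve"] = (
        inputs["pressure_psi"] > 150.0 or inputs["vibration_mm_s"] > 7.0
    )
    return outputs


state = {"pump_on": False}
print(scan_cycle({"pressure_psi": 60.0, "vibration_mm_s": 1.2}, state))
# {'pump_motor': True, 'relief_valve': False}
```

A real controller repeats this cycle continuously, typically every few milliseconds; Stuxnet-style attacks work precisely by tampering with logic of this kind.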
López-Morales and his colleagues from the CISPA Helmholtz Center for Information Security in Saarbrücken, Germany, and from the University of California, Santa Cruz, found that attacks and defenses had generally been evaluated on only “a small subset of the PLCs available on the world market.”
As the team put it in their paper, which was presented at the 33rd USENIX Security Symposium last fall: “The market share percentage of important PLC makers such as Mitsubishi, Omron, ABB, and GE is considerably higher than their research share—the number of papers explicitly using them for attack and defense evaluation purposes. For example, even though Mitsubishi’s PLCs account for 14% of the global market, they contribute to 0% of the research share in our review.”
In other words, the researchers said, attacks detailed in the research literature, which variously target PLC communications modules, operating systems, control logic, memory, CPUs, and firmware, or use the input/output lines as covert poisoning or exfiltration channels, may have gone unchecked against many of the commercial devices on the market.
It was not supposed to be this way; at least, not since the summer of 2010. The reason? That was the year the ICS/PLC sector got the mother of all wake-up calls, when the vulnerability of PLCs was demonstrated for all to see by the Stuxnet computer worm, an infectious and ferocious piece of malware thought to have been developed by U.S. and Israeli intelligence services.
Using four previously unknown (also called ‘zero-day’) Windows vulnerabilities, Stuxnet percolated down to its target: a type of PLC made by Siemens of Germany that Iranian engineers were using to drive uranium-enriching gas centrifuges. The malware forced the centrifuge drive motors to intermittently speed up and then slow down, inducing out-of-bounds accelerations and decelerations that successfully shattered hundreds of the machines at Iran’s nuclear facility in Natanz.
But if Stuxnet’s success was such a wake-up call regarding PLC attack implications, why did the Texas A&M-led research find the ICS/PLC sector in such a dire situation, with research gaps and a lack of reproducible research prevailing? López-Morales gives the sector its due, saying it has indeed made major efforts to improve PLC security since Stuxnet, but he adds that the enabling technology in industry has shifted to more-connected versions that are more inviting to attackers.
“ICS/PLC security has gotten better since Stuxnet,” said López-Morales. “In the last few years, ICS companies have made important efforts to form meaningful collaborations with ICS security researchers. The data reveals that the number of PLC security research papers steadily goes up, starting in 2011, and Stuxnet was discovered one year before, in 2010. I do not think that this is a coincidence.”
Specifically, they found that no peer-reviewed PLC security papers were published in the literature in 2010, while output rose over the following decade, reaching 20 papers, before declining again, perhaps because Covid-19 lockdowns affected industrial security research activity.
López-Morales cited another important factor. “In 2010 when Stuxnet was discovered, ICS and PLCs were, in general, not connected to any computer network, such as the Internet. In fact, the PLC that Stuxnet targeted, the Siemens S7-300, did not include an Ethernet port, which means it was not possible to connect it to a computer network even if you wanted to.”
[Stuxnet was dropped in cafes and leisure centers near Natanz on USB thumb drives, in the hope social engineering would see it delivered by employees. It was.]
He explained that “modern PLCs are designed to be connected to a network and have advanced network features. When a PLC or any computer system is connected to a network, the opportunities for it to be attacked increase exponentially. This means that the interconnected world of today’s PLCs is much more dangerous than the isolated world when Stuxnet happened. But ICS researchers have risen to the challenge and have developed new ways to protect ICS and PLCs.”
Another structural change the Texas A&M research identified was the move from always using ‘HardPLCs’—legacy units based on dedicated, proprietary hardware—to cheaper ‘SoftPLCs’, which emulate the former but are based on Windows and Linux platforms. In a sector lacking shared security research, it might seem a move likely to increase critical infrastructure’s attack surface, as more cyberattackers know how to game those operating systems.
However, that is not necessarily the case. “There is also a big advantage to SoftPLCs,” said López-Morales. “All the advanced security mitigation measures that have been developed for modern operating systems over the years are now available to SoftPLCs. For example, Address Space Layout Randomization (ASLR) is a security measure that prevents exploitation of memory corruption vulnerabilities, and it is used by most modern operating systems, but not by many legacy PLCs because of hardware limitations.”
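The protection ASLR provides can be illustrated with a toy simulation (all addresses, offsets, and entropy figures below are invented for illustration): an attacker who hardcodes the address of a code gadget succeeds every time against a fixed memory layout, but almost never once the load address is randomized.

```python
import random

PAGE = 0x1000       # page-aligned randomization granularity (illustrative)
ASLR_BITS = 28      # bits of entropy in the random slide (illustrative)
FIXED_BASE = 0x7F0000000000  # hypothetical predictable library base


def load_library(randomize: bool) -> int:
    """Return the (simulated) base address a library is loaded at."""
    if not randomize:
        return FIXED_BASE  # same address on every run: exploitable
    slide = random.getrandbits(ASLR_BITS) * PAGE
    return FIXED_BASE + slide


def exploit(guessed_gadget: int, randomize: bool) -> bool:
    """A memory-corruption exploit jumps to a hardcoded address;
    it only works if that guess matches the gadget's real location."""
    base = load_library(randomize)
    return guessed_gadget == base + 0x1234  # gadget offset within the library


# Without ASLR, the hardcoded guess works every time.
assert exploit(FIXED_BASE + 0x1234, randomize=False)

# With ASLR, 1,000 attempts almost certainly all fail.
hits = sum(exploit(FIXED_BASE + 0x1234, randomize=True) for _ in range(1000))
print(hits)  # almost certainly 0
```

Legacy HardPLCs often lack both the hardware (such as a memory management unit) and the OS support to randomize layouts this way, which is why SoftPLCs inherit the mitigation essentially for free.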
The researchers found a dearth of measures for securing SoftPLCs. As a result, the team recommended that industry develop defense methods that can secure both hard and soft varieties, as well as SoftPLC-specific measures addressing their potentially risky use of features like virtualization and cloud integration.
What do security professionals make of Texas A&M’s findings, particularly the lack of code repository sharing behind the reproducibility crisis? Said Ruimin Sun at the Knight Foundation School of Computing at Florida International University in Miami, who studies control logic modification attacks on PLCs, “My take is that proprietary PLC hardware commonly limits data and code sharing due to the sheer amount of code preparation effort and motivation required to do so.”
Sun noted that the move to open source software in some ICS systems is making it easier to get hold of at least some of the test code required. She said an annual ACM workshop she co-chairs, called Re-design Industrial Control Systems with Security (RICSS), “saw more papers with open source code in 2024 than the 2023 event; mostly hardware-agnostic machine learning-driven code for anomaly detection.”
Regarding the lack of research on SoftPLCs, Sun said it is more of a problem for academics right now. “Although SoftPLCs are more cost-effective, they are more prevalent in academic testing environments.” She added that “PLC hardware is still considered more robust and reliable, and since PLCs are controlling critical infrastructure, the industry is still more confident using them.”
Sun’s co-chair at RICSS, Mu Zhang of the Kahlert School of Computing at the University of Utah in Salt Lake City, said the sharing of ICS security research is “challenging” for two reasons. “First, ICS environments are traditionally closed systems, often comprising proprietary hardware and software components, which makes it difficult to publicly release research findings without inadvertently disclosing intellectual property.
“Second, ICS platforms vary significantly from one another, making it difficult to establish a generalizable method for disseminating research results. Take machine learning-based anomaly detection as an example: existing detection models are typically developed and evaluated on specific testbeds using manually crafted attacks designed for those systems.
“Consequently, reproducing these attacks and verifying the effectiveness of an anomaly detector across different environments—such as applying a model designed for a water treatment system to a manufacturing testbed—is extremely difficult, if not impossible.”
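Zhang’s point can be made concrete with a toy anomaly detector: a statistical baseline fitted to attack-free telemetry from one testbed. Everything below is invented for illustration (real systems use far richer models), but it shows why such a model is tied to the system it was trained on: the learned mean and spread describe one plant’s sensors, not anomalies in general.

```python
import statistics


def fit_baseline(normal_readings):
    """Learn a mean and standard deviation for one sensor from
    attack-free telemetry (hypothetical values, one testbed)."""
    mu = statistics.mean(normal_readings)
    sigma = statistics.stdev(normal_readings)
    return mu, sigma


def is_anomalous(value, mu, sigma, k=3.0):
    """Flag readings more than k standard deviations from the baseline."""
    return abs(value - mu) > k * sigma


# Hypothetical pressure telemetry from a water-treatment testbed.
normal = [100.2, 99.8, 100.5, 99.9, 100.1, 100.3, 99.7, 100.0]
mu, sigma = fit_baseline(normal)

print(is_anomalous(100.2, mu, sigma))  # False: within the learned band
print(is_anomalous(140.0, mu, sigma))  # True: e.g., a spoofed sensor value
```

Applied to a manufacturing testbed whose sensors sit at entirely different operating points, the same (mu, sigma) would flag perfectly normal readings, which is exactly the cross-environment reproducibility problem Zhang describes.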
It’s tough, too, Zhang said, for academic researchers who want to probe the efficacy of security measures to gain access to emerging SoftPLC-based ICS testbeds, “as these systems are typically designed and managed by industry experts from companies like Rockwell and Siemens, or system integrators.”
As a result, Zhang said, “Academics not only struggle to obtain access to these platforms, but also face significant constraints in modifying them for research purposes.”
Such struggles highlight the fine balance that needs to be struck between withholding critical infrastructure details that could be useful to an attacker, and verifying, via the scientific method, that results are reproducible and actually work in practice.
As cybersecurity is part of the broad discipline of computer science, its practitioners ought to adopt a more scientific approach now; the public will not be forgiving if they do so only after disaster strikes.
Paul Marks is a technology, aviation, and spaceflight journalist, writer, and editor based in London, U.K.