Digital twins promoted from the factory floor to cybersecurity
Cybersecurity may be about to get caught up in the digital twin revolution currently sweeping automotive, health care, aerospace and other industries worldwide, according to a new study.
The increasing accessibility of robots and other manufacturing equipment through remote connections has created new vulnerabilities for malicious cyberattacks. To combat the growing cyber threat, a team of researchers from the National Institute of Standards and Technology (NIST) and the University of Michigan has developed a cybersecurity framework incorporating digital twin technology, machine learning, and human expertise to identify signs of cyberattacks.
In a paper recently published in IEEE Transactions on Automation Science and Engineering, the NIST and University of Michigan researchers demonstrated the effectiveness of their approach by detecting cyberattacks aimed at a 3D printer in their laboratory. They also suggest that the framework can be extended to a wide range of manufacturing technologies.
Detecting cyberattacks can be challenging, as they are often subtle and hard to distinguish from other system anomalies. Operational data that describes the activity within machines, such as sensor data, error signals, and digital commands, can aid in detecting cyberattacks. However, directly accessing this data in real time from operational technology (OT) devices, such as a 3D printer, can jeopardise the safety and performance of the manufacturing process on the factory floor.
“Typically, I have observed that manufacturing cybersecurity strategies rely on copies of network traffic that do not always help us see what is occurring inside a piece of machinery or process,” says NIST mechanical engineer Michael Pease, a co-author of the study. “As a result, some OT cybersecurity strategies seem analogous to observing the operations from the outside through a window; however, adversaries might have found a way onto the floor.”
Digital twins closely linked to physical counterparts
Digital twins are not just ordinary computer models but are closely linked to their physical counterparts, extracting real-time data from them. When inspecting a physical machine while it is running is not possible, digital twins offer a practical alternative.
In recent years, using digital twins for manufacturing machinery has provided engineers with abundant operational data, enabling them to carry out various tasks, including predicting when parts will require maintenance without affecting performance or safety.
The study's authors suggest that, in addition to identifying routine indicators of wear and tear, digital twins could help uncover hidden anomalies within manufacturing data.
“Because manufacturing processes produce such rich data sets — temperature, voltage, current — and they are so repetitive, there are opportunities to detect anomalies that stick out, including cyberattacks,” says Dawn Tilbury, a professor of mechanical engineering at the University of Michigan and study co-author.
To leverage the potential of digital twins for enhanced cybersecurity, the researchers developed a framework featuring a novel strategy, which they tested on a commercially available 3D printer.
The team created a digital twin to replicate the 3D printing process and supplied it with data from the actual printer. While the printer fabricated a plastic hourglass, computer programs monitored and analysed continuous data streams, including measured temperatures from the physical printing head and real-time temperatures computed by the digital twin.
The researchers simulated waves of disturbances on the printer, some of which were innocent anomalies, while others, like causing the printer to report inaccurate temperature readings, were more malicious. The framework employed a process of elimination to differentiate between a cyberattack and a routine system anomaly, despite the deluge of data available.
Machine learning models, described in the paper and trained on large volumes of normal operating data, analysed data from both the physical printer and its digital twin for patterns. Because the models had learned the printer's normal operating conditions, they could detect deviations from the ordinary.
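The paper itself is not reproduced here, but the underlying idea of comparing physical measurements against the digital twin's predictions can be sketched in a few lines. The Python snippet below is a minimal illustration using invented names and synthetic data, not the authors' implementation: a tolerance is learned from the residuals observed during normal operation, and later samples whose residuals exceed it are flagged for further analysis.

```python
import numpy as np

# Hypothetical sketch of residual-based anomaly detection: compare the
# temperature measured at the physical print head with the value the digital
# twin computes for the same instant, and flag samples whose residual falls
# outside bounds learned from normal operating data. All names and data here
# are illustrative.

rng = np.random.default_rng(0)

def learn_normal_bounds(measured, simulated, k=3.0):
    """Estimate the residual's mean and a k-sigma tolerance from normal operation."""
    residuals = np.asarray(measured) - np.asarray(simulated)
    return residuals.mean(), k * residuals.std()

def flag_anomalies(measured, simulated, mean, tolerance):
    """Return sample indices whose residual deviates beyond the tolerance."""
    residuals = np.asarray(measured) - np.asarray(simulated)
    return np.where(np.abs(residuals - mean) > tolerance)[0]

# Synthetic stand-in data: the twin predicts 210 °C; real sensors add noise.
twin_temps = np.full(1000, 210.0)
normal_temps = twin_temps + rng.normal(0, 0.5, 1000)
mean, tol = learn_normal_bounds(normal_temps, twin_temps)

# A new run in which a handful of readings are falsified (reported 5 °C low).
live_temps = twin_temps + rng.normal(0, 0.5, 1000)
live_temps[400:410] -= 5.0
print(flag_anomalies(live_temps, twin_temps, mean, tol))  # indices near 400-409
```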
If an irregularity was detected, the models passed it to other computer models that verified if the abnormal signals matched anything in a catalogue of known issues, such as the printer's fan cooling its printing head more than usual. The system then classified the anomaly as an expected deviation or a potential cyber threat.
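This catalogue-matching step amounts to a process of elimination. The sketch below is purely illustrative, with invented catalogue entries, field names and thresholds: each known issue pairs a label with a test, and an anomaly that matches nothing in the catalogue is escalated as a potential cyber threat for human review.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of matching a flagged anomaly against a catalogue of
# known, benign issues. Entries, field names and thresholds are invented.

@dataclass
class KnownIssue:
    label: str
    matches: Callable[[dict], bool]

CATALOGUE = [
    KnownIssue(
        label="fan over-cooling the print head",
        matches=lambda a: a["signal"] == "head_temp"
        and a["direction"] == "low" and a["fan_duty"] > 0.9,
    ),
    KnownIssue(
        label="minor ambient temperature drift",
        matches=lambda a: a["signal"] == "head_temp" and abs(a["magnitude"]) < 1.0,
    ),
]

def classify(anomaly: dict) -> str:
    """Process of elimination: return a known-issue label or escalate."""
    for issue in CATALOGUE:
        if issue.matches(anomaly):
            return f"expected deviation: {issue.label}"
    return "unmatched anomaly: flag as potential cyberattack for expert review"

# A large temperature drop while the fan is nearly idle matches nothing known.
print(classify({"signal": "head_temp", "direction": "low",
                "magnitude": 4.8, "fan_duty": 0.2}))
```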
In the final step, a human expert analysed the system's results and made a decision.
Human experts needed less over time
“The framework provides tools to systematically formalise the subject matter expert’s knowledge on anomaly detection,” says lead author Efe Balta, a former mechanical engineering graduate student at the University of Michigan and now a postdoctoral researcher at ETH Zurich. “If the framework hasn’t seen a certain anomaly before, a subject matter expert can analyse the collected data to provide further insights to be integrated into and improve the system.”
In most cases, the cybersecurity expert would verify the system's suspicions or teach it a new anomaly to add to the database. Over time, the models in the system would continue to learn, and the human expert would need to provide less input.
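That feedback loop can be pictured as the catalogue growing with each expert decision. The snippet below is again a hypothetical sketch with made-up names: when the expert supplies a new label, it is stored against the anomaly's signature so that similar events can be recognised automatically in later runs.

```python
# Hypothetical sketch of the human-in-the-loop step (illustrative names only):
# the expert either confirms the system's verdict or supplies a new label, and
# newly labelled anomalies are stored so they need no expert input next time.

known_anomalies: dict[str, str] = {}  # anomaly signature -> label learned so far

def expert_review(signature: str, system_verdict: str, expert_label: str | None) -> str:
    """Confirm the system's verdict, or record the expert's label for future runs."""
    if expert_label is None:          # expert agrees with the system
        return system_verdict
    known_anomalies[signature] = expert_label
    return expert_label

# First occurrence: the expert must label the event; afterwards it is known.
expert_review("head_temp_low_fan_idle", "potential cyberattack", "spoofed sensor reading")
print(known_anomalies.get("head_temp_low_fan_idle"))  # spoofed sensor reading
```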
When it came to the 3D printer, the team validated the efficacy of their cybersecurity framework, which correctly distinguished cyberattacks from typical anomalies by analysing both physical and simulated data.
While the initial results were promising, the researchers intend to investigate how the framework responds to more varied and aggressive attacks, ensuring its reliability and scalability. Their future plans could also involve implementing the strategy across a fleet of printers to assess whether expanding the coverage enhances or impairs detection capabilities.
“With further research, this framework could potentially be a huge win-win for both maintenance as well as monitoring for indications of compromised OT systems,” says Pease.