Abstract
Model-based defenses have been promoted over the past decade as an essential safeguard against intrusion and data-deception attacks on the control networks used to digitally regulate the operation of critical industrial systems such as nuclear reactors. The idea is that physics-based models can differentiate between genuine engineering data, i.e., data unaltered by adversaries, and malicious engineering data, e.g., falsified flowrates, temperatures, etc. Machine learning techniques have also been proposed to further improve the discriminating power of model-based defenses by continuously monitoring the engineering data for deviations that are inconsistent with the physics. While this premise is sound, critical systems, such as nuclear reactors, chemical plants, and oil and gas facilities, share a common disadvantage: determined adversaries, such as state-sponsored attackers, can obtain almost any information about them. One must therefore question whether model-based defenses would remain resilient under these extreme adversarial conditions. This paper represents a first step toward answering this question. Specifically, we introduce self-learning techniques, both purely data-driven, e.g., deep neural networks, and physics-based, that are able to predict the dynamic behavior of a nuclear reactor model. The results indicate that technically capable attackers can learn highly accurate models of reactor behavior, which raises concerns about the effectiveness of model-based defenses.
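To make the abstract's threat concrete, the following is a minimal illustrative sketch, not the paper's actual models or reactor data: an "attacker" observes simulated one-delayed-group point-kinetics data (neutron population, precursor concentration, reactivity input) and fits a data-driven surrogate by least squares. All parameter values, the Euler discretization, and the feature choice are assumptions made for this example.

```python
# Sketch: learning a surrogate for reactor dynamics from observed data.
# Parameters below are illustrative, not from any real plant.
import random

BETA = 0.0065   # delayed-neutron fraction (illustrative)
LAM = 1e-3      # neutron generation time [s] (illustrative)
DECAY = 0.08    # precursor decay constant [1/s]
DT = 0.01       # Euler time step [s]

def step(n, c, rho):
    """One explicit-Euler step of the point-kinetics equations."""
    dn = (rho - BETA) / LAM * n + DECAY * c
    dc = BETA / LAM * n - DECAY * c
    return n + DT * dn, c + DT * dc

def rollout(steps, seed):
    """Simulate 'observed' plant data under small random reactivity inputs."""
    rng = random.Random(seed)
    n, c = 1.0, BETA / (LAM * DECAY)   # start at equilibrium
    data = []
    for _ in range(steps):
        rho = rng.uniform(-5e-4, 5e-4)
        n2, c2 = step(n, c, rho)
        data.append((n, c, rho, n2))
        n, c = n2, c2
    return data

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for k in range(i, 4):
                M[r][k] -= f * M[i][k]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][k] * x[k] for k in (1, 2) if k > i)) / M[i][i]
    return x

def fit_power_model(data):
    """Least-squares fit of n[k+1] = w0*n + w1*c + w2*(rho*n).

    The Euler update is exactly linear in these features, so the
    attacker's surrogate can match the simulated plant closely.
    """
    feats = [[n, c, rho * n] for n, c, rho, _ in data]
    targets = [row[3] for row in data]
    s = [max(abs(f[i]) for f in feats) for i in range(3)]   # scale for conditioning
    feats = [[f[i] / s[i] for i in range(3)] for f in feats]
    A = [[sum(f[i] * f[j] for f in feats) for j in range(3)] for i in range(3)]
    b = [sum(f[i] * t for f, t in zip(feats, targets)) for i in range(3)]
    w = solve3(A, b)
    return [w[i] / s[i] for i in range(3)]   # undo feature scaling

w = fit_power_model(rollout(2000, seed=0))
test_data = rollout(500, seed=1)   # fresh, unseen trajectory
err = max(abs(w[0] * n + w[1] * c + w[2] * rho * n - n2) / n2
          for n, c, rho, n2 in test_data)
print(f"learned reactivity coefficient {w[2]:.3f} (theory {DT / LAM:.3f}), "
      f"max relative one-step error {err:.2e}")
```

The point of the sketch is the one the abstract makes: given enough observed engineering data, even a simple regression recovers the plant's dynamics essentially exactly, so the attacker's surrogate would pass a model-based consistency check; the paper's deep-neural-network and physics-based learners target the same capability for a full reactor model.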
Acknowledgments
This work was supported by Sandia National Laboratories under the Laboratory Directed Research and Development program; by the U.S. Department of Energy under the Nuclear Energy University Program project DE-NE0008705; by Purdue University School of Nuclear Engineering under internal funding; and by the National Science Foundation under grants IIS-1636891 and ACI-1547358.