On the vulnerability of data-driven structural health monitoring models to adversarial attack
Author(s): | Max David Champneys, Andre Green, John Morales, Moisés Silva, David Mascareñas
Medium: | journal article |
Language(s): | English |
Published in: | Structural Health Monitoring, April 2021, v. 20, n. 4
Page(s): | 147592172092023 |
DOI: | 10.1177/1475921720920233 |
Abstract: |
Many approaches at the forefront of structural health monitoring rely on cutting-edge techniques from the field of machine learning. Recently, much interest has been directed towards the study of so-called adversarial examples: deliberate input perturbations that deceive machine learning models while remaining semantically identical to the original input. This article demonstrates that data-driven approaches to structural health monitoring are vulnerable to attacks of this kind. In the perfect-information or ‘white-box’ scenario, a transformation is found that maps every example in the Los Alamos National Laboratory three-storey structure dataset to an adversarial example. Also presented is an adversarial threat model specific to structural health monitoring. The threat model is proposed with a view to motivating discussion of ways in which structural health monitoring approaches might be made more robust to the threat of adversarial attack.
License: | This creative work has been published under the Creative Commons Attribution 4.0 International (CC-BY 4.0) license, which allows copying and redistribution, as well as adaptation, of the original work, provided appropriate credit is given to the original author and the conditions of the license are met.
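The abstract does not state which attack algorithm the authors use to construct the adversarial mapping, so the sketch below is only an illustration of the ‘white-box’ setting it describes: the fast gradient sign method, a standard one-step white-box attack, applied to a generic PyTorch classifier. The names model, x, y and epsilon are illustrative assumptions, not code from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step white-box attack (fast gradient sign method).

    Assumes full access to the model and its gradients, matching
    the 'white-box' scenario described in the abstract.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true labels
    loss.backward()
    # Step each input feature by epsilon in the direction that most
    # increases the classifier's loss; small epsilon keeps the
    # perturbed signal close to the original.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical usage: x is a batch of vibration feature vectors and
# y the true damage-state labels; x_adv stays numerically close to x
# but may flip the model's prediction.
# x_adv = fgsm_attack(classifier, x, y, epsilon=0.01)
```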
About this data sheet
Reference-ID: | 10562420
Published on: | 11/02/2021
Last updated on: | 09/07/2021