On the vulnerability of data-driven structural health monitoring models to adversarial attack
Author(s): | Max David Champneys, Andre Green, John Morales, Moisés Silva, David Mascareñas |
Medium: | Journal article |
Language(s): | English |
Published in: | Structural Health Monitoring, vol. 20, no. 4, April 2021 |
Page(s): | 147592172092023 |
DOI: | 10.1177/1475921720920233 |
Abstract: |
Many approaches at the forefront of structural health monitoring rely on cutting-edge techniques from the field of machine learning. Recently, much interest has been directed towards the study of so-called adversarial examples: deliberate input perturbations that deceive machine learning models while remaining semantically identical to the original input. This article demonstrates that data-driven approaches to structural health monitoring are vulnerable to attacks of this kind. In the perfect-information or ‘white-box’ scenario, a transformation is found that maps every example in the Los Alamos National Laboratory three-storey structure dataset to an adversarial example. Also presented is an adversarial threat model specific to structural health monitoring. The threat model is proposed with a view to motivating discussion of ways in which structural health monitoring approaches might be made more robust to the threat of adversarial attack. |
License: | This work was published under the Creative Commons Attribution 4.0 International license (CC BY 4.0) and may be reproduced, distributed, made publicly available, adapted, and modified under the terms of that license, provided the author or rights holder is credited and the license terms are observed. |
Reference-ID: | 10562420 |
Published on: | 11.02.2021 |
Modified on: | 09.07.2021 |