On the vulnerability of data-driven structural health monitoring models to adversarial attack
Author(s): Max David Champneys, Andre Green, John Morales, Moisés Silva, David Mascareñas
Medium: journal article
Language(s): English
Published in: Structural Health Monitoring, April 2021, n. 4, v. 20
Page(s): 147592172092023
DOI: 10.1177/1475921720920233
Abstract:
Many approaches at the forefront of structural health monitoring rely on cutting-edge techniques from the field of machine learning. Recently, much interest has been directed towards the study of so-called adversarial examples: deliberate input perturbations that deceive machine learning models while remaining semantically identical to the original input. This article demonstrates that data-driven approaches to structural health monitoring are vulnerable to attacks of this kind. In the perfect-information or 'white-box' scenario, a transformation is found that maps every example in the Los Alamos National Laboratory three-storey structure dataset to an adversarial example. Also presented is an adversarial threat model specific to structural health monitoring, proposed with a view to motivating discussion of ways in which structural health monitoring approaches might be made more robust to the threat of adversarial attack.
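The record does not reproduce the attack itself. As a minimal sketch of the white-box setting the abstract describes, the snippet below implements the Fast Gradient Sign Method (FGSM), a standard gradient-based attack that perturbs an input to increase a classifier's loss. Whether the article uses FGSM or a different transformation is not stated here; the toy damage classifier, feature dimensions, and `epsilon` value are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float) -> torch.Tensor:
    """One-step white-box attack (FGSM): nudge each input feature by
    +/- epsilon in the direction that increases the classifier loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Access to the loss gradient requires full knowledge of the model:
    # the 'perfect information' assumption of the white-box scenario.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Illustrative usage on a hypothetical two-class damage classifier:
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(8, 64)          # batch of 8 feature vectors
y = torch.randint(0, 2, (8,))   # 'healthy' / 'damaged' labels
x_adv = fgsm_attack(model, x, y, epsilon=0.05)
```

Because the perturbation is bounded by `epsilon` per feature, the adversarial input stays close to the original measurement while potentially flipping the model's prediction, which is the vulnerability the article demonstrates for data-driven structural health monitoring.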
License: This work has been published under the Creative Commons Attribution 4.0 license (CC-BY 4.0). Sharing and adapting the work is permitted as long as the author is credited and the license is indicated (with the link above). You must also indicate whether any changes were made to the original.
Information about this record:
Reference-ID: 10562420
Published on: 11.02.2021
Last modified on: 09.07.2021