Medical images under tampering

Bibliographic Details
Published in: Multimedia Tools and Applications, 2024-01, Vol. 83 (24), p. 65407-65439
Main Authors: Tsai, Min-Jen; Lin, Ping-Ying
Format: Article
Language: English
Online Access: Full text
Description
Summary: Attacks on deep learning models are a constant threat today. As more deep learning models and artificial intelligence (AI) systems are deployed across industries, the likelihood of their being attacked increases dramatically. The medical domain is of the greatest concern in this context, because an erroneous AI decision could have a catastrophic outcome and even lead to death. This study therefore builds a systematic procedure to determine how well medical images can resist a specific adversarial attack, the one-pixel attack. While not the strongest attack, it is simple and effective, and it could also occur by accident or through an equipment malfunction. The experimental results show that it is difficult for medical images to survive a one-pixel attack.
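
The one-pixel attack named in the summary is commonly implemented as a black-box search with differential evolution, following Su et al.'s formulation: a candidate solution encodes one pixel's coordinates and colour, and the optimizer minimizes the classifier's confidence in the true class. The following minimal Python sketch illustrates that idea; the predict callable (mapping an HxWx3 image array to class probabilities) is an assumed interface, not the authors' actual experimental pipeline.

    # Minimal one-pixel attack sketch (assumed interface: predict(img) -> probs).
    import numpy as np
    from scipy.optimize import differential_evolution

    def one_pixel_attack(image, true_label, predict, max_iter=75):
        """Search for one (x, y, r, g, b) change that defeats the classifier."""
        h, w, _ = image.shape
        # Each candidate encodes a pixel position and an RGB colour.
        bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)]

        def perturb(params):
            x, y, r, g, b = (int(round(p)) for p in params)
            adv = image.copy()
            adv[x, y] = (r, g, b)  # overwrite exactly one pixel
            return adv

        def confidence(params):
            # Objective to minimize: the model's confidence in the true class.
            return predict(perturb(params))[true_label]

        result = differential_evolution(confidence, bounds,
                                        maxiter=max_iter, popsize=10, seed=0)
        adv = perturb(result.x)
        success = int(np.argmax(predict(adv))) != true_label
        return adv, success

Because differential evolution queries only the model's outputs, the attack needs no gradients; this black-box character is also why an equivalent perturbation can arise from a stuck sensor pixel or a transmission error, as the summary notes.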
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-023-17968-1