Impact of Adversarial Examples on the Efficiency of Interpretation and Use of Information from High-Tech Medical Images
Format: Conference paper
Language: English
Online access: Full text
Abstract:

In this paper we discuss the possibility of adversarial examples (AE) appearing in high-tech medical images (computed tomography and magnetic resonance imaging) due to the noise inherent in the technology of their formation, and we suggest ways to counteract this effect. The paper is built around two questions:

1. Can individual instances of real high-tech medical images act as adversarial examples when analyzed with neural networks?
2. Is it possible to defend against such "natural" adversarial attacks with the simplest possible means?

In our research we tried the following defence methods: adversarial training, Gaussian data augmentation and bounded ReLU (see Section 3 for a detailed description). We conducted the experiment with a convolutional neural network combining U-Net with a region proposal network. Two datasets were chosen as source data: the Lung Image Database Consortium image collection, containing 1018 lung cancer screening thoracic CT scans, and the Brain MRI DataSet, containing clinical imaging data of glioma patients (a total of 274 cases).

The experiments showed that the degree of manifestation of AE depends on how the model is trained. When a model is trained without any defence against adversarial examples, the number of incorrectly recognized images is quite large (200 per 10,000 for CT and 285 per 10,000 for MRI). By properly selecting the activation function of the CNN, it can be reduced to 60 and 68, respectively. With augmentation of the training dataset by Gaussian-noised images, this number drops to 21 and 26. An even greater reduction is achieved with adversarial training: 12 and 15. Thus, the adversarial effect remains possible after applying adversarial training, but the degree of noise in such an image will be much higher than before, and it will be easy enough for the doctor to recognize such images visually and exclude them from further consideration.
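For readers who want a concrete picture of the defences named in the abstract, the sketch below is a minimal PyTorch illustration, not the authors' implementation: a bounded ReLU activation, Gaussian noise augmentation of a training batch, and the FGSM perturbation step that adversarial training commonly builds on. The bound t, the noise level sigma, the epsilon value, the fake batch shape and the use of a cross-entropy loss (as a stand-in for the paper's actual segmentation loss) are all assumptions made only for this example.

```python
# Illustrative sketch (not the paper's code): bounded ReLU, Gaussian data
# augmentation, and the FGSM step behind adversarial training.
# The bound t, sigma and eps below are hypothetical values for the example.
import torch
import torch.nn as nn

class BoundedReLU(nn.Module):
    """ReLU clipped at an upper bound t: min(max(x, 0), t)."""
    def __init__(self, t: float = 1.0):
        super().__init__()
        self.t = t

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(x, min=0.0, max=self.t)

def gaussian_augment(images: torch.Tensor, sigma: float = 0.05) -> torch.Tensor:
    """Add zero-mean Gaussian noise to a batch, so the training set can be
    enlarged with noised copies of the original images."""
    return images + sigma * torch.randn_like(images)

def fgsm_perturb(model: nn.Module, images: torch.Tensor, labels: torch.Tensor,
                 eps: float = 0.01) -> torch.Tensor:
    """One FGSM step: perturb images in the direction that increases the loss.
    Adversarial training mixes such perturbed images into the training batches."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).detach()

# Usage on a fake batch of single-channel CT slices (shapes are made up).
batch = torch.rand(4, 1, 128, 128)
noisy = gaussian_augment(batch)          # Gaussian data augmentation
act = BoundedReLU(t=1.0)
print(act(noisy).shape)                  # torch.Size([4, 1, 128, 128])
```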
ISSN: 2305-7254, 2343-0737
DOI: 10.23919/FRUCT.2019.8711974