Adversarially Robust Hyperspectral Image Classification via Random Spectral Sampling and Spectral Shape Encoding


Detailed Description

Bibliographic Details
Published in: IEEE Access 2021, Vol. 9, p. 66791-66804
Main authors: Park, Sungjune; Lee, Hong Joo; Ro, Yong Man
Format: Article
Language: English
Online Access: Full text
Description
Summary: Although hyperspectral image (HSI) classification has adopted deep neural networks (DNNs) and shown remarkable performance, there is a lack of studies on the adversarial vulnerability of HSI classification. In this paper, we propose a novel HSI classification framework that is robust to adversarial attacks. To this end, we focus on the unique spectral characteristic of HSIs (i.e., the distinctive spectral patterns of materials). Exploiting this characteristic, we present random spectral sampling and spectral shape feature encoding for robust HSI classification. For the random spectral sampling, spectral bands are randomly sampled from the entire spectrum for each pixel of the input HSI. In addition, the overall spectral shape information, which is robust to adversarial attacks, is fed into a shape feature extractor to acquire the spectral shape feature. The proposed framework thus provides adversarial robustness for HSI classifiers via randomization effects and spectral shape feature encoding. To the best of our knowledge, the proposed framework is the first work dealing with adversarial robustness in HSI classification. In experiments, we verify that our framework considerably improves adversarial robustness under diverse adversarial attack scenarios and outperforms existing adversarial defense methods.
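The random spectral sampling described in the abstract (randomly selecting a subset of spectral bands per pixel) can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' implementation; the function name, the per-pixel independent sampling without replacement, and the choice to keep the sampled bands in spectral order are all assumptions.

```python
import numpy as np

def random_spectral_sampling(hsi, num_samples, rng=None):
    """Randomly sample spectral bands independently for each pixel.

    hsi: (H, W, B) hyperspectral cube; num_samples: bands kept per pixel.
    Illustrative sketch only; names and details are not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, b = hsi.shape
    # Argsorting per-pixel random scores yields a random permutation of the
    # band indices for every pixel; taking the first num_samples entries
    # gives a random subset of bands without replacement.
    idx = np.argsort(rng.random((h, w, b)), axis=-1)[..., :num_samples]
    # Keep the chosen bands in ascending spectral order so the overall
    # spectral shape of each pixel's signature is preserved.
    idx = np.sort(idx, axis=-1)
    return np.take_along_axis(hsi, idx, axis=-1)

cube = np.random.rand(4, 4, 100)          # toy 4x4 HSI with 100 bands
sampled = random_spectral_sampling(cube, num_samples=30)
print(sampled.shape)  # (4, 4, 30)
```

Because each pixel draws a different band subset on every forward pass, an attacker cannot rely on a fixed input-to-gradient mapping, which is the randomization effect the abstract refers to.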
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3076225