AC-WGAN-GP: Generating Labeled Samples for Improving Hyperspectral Image Classification with Small-Samples


Detailed description

Saved in:
Bibliographic details
Published in: Remote Sensing (Basel, Switzerland), 2022-10, Vol. 14 (19), p. 4910
Main authors: Sun, Caihao, Zhang, Xiaohua, Meng, Hongyun, Cao, Xianghai, Zhang, Jinhua
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: The lack of labeled samples severely restricts the classification performance of deep learning on hyperspectral image classification. To solve this problem, Generative Adversarial Networks (GANs) are usually used for data augmentation. However, GANs have several problems with this task, such as the poor quality of the generated samples and an unstable training process. Therefore, knowing how to construct a GAN that generates high-quality hyperspectral training samples is meaningful for the small-sample classification task of hyperspectral data. In this paper, an Auxiliary Classifier based Wasserstein GAN with Gradient Penalty (AC-WGAN-GP) is proposed. The framework includes AC-WGAN-GP, an online generation mechanism, and a sample selection algorithm. The proposed method has the following distinctive advantages. First, the input of the generator is guided by prior knowledge, and a separate classifier is introduced into the architecture of AC-WGAN-GP to produce reliable labels. Second, an online generation mechanism ensures the diversity of generated samples. Third, generated samples that are similar to real data are selected. Experiments on three public hyperspectral datasets show that the generated samples follow the same distribution as the real samples and have enough diversity, which effectively expands the training set. Compared to other competitive methods, the proposed framework achieved better classification accuracy with a small number of labeled samples.
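The "Gradient Penalty" in the method's name refers to the standard WGAN-GP stabilization term: the critic's input-gradient norm is pushed toward 1 at points interpolated between real and generated samples. The toy NumPy sketch below illustrates only that penalty term; the linear critic (so its gradient is analytic) and all names are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    # Toy linear critic f(x) = x . w; its gradient w.r.t. x is simply w.
    return x @ w

def gradient_penalty(real, fake, w, lam=10.0):
    # WGAN-GP term: sample points on straight lines between real and fake
    # batches and penalize deviation of the critic's gradient norm from 1.
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake   # interpolated samples
    # For the linear critic, the gradient at every x_hat equals w,
    # so no autodiff framework is needed in this sketch.
    grad = np.tile(w, (real.shape[0], 1))
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(8, 4))   # stand-ins for real hyperspectral pixels
fake = rng.normal(size=(8, 4))   # stand-ins for generator output
w = np.array([0.6, 0.0, 0.8, 0.0])        # unit-norm weights
print(gradient_penalty(real, fake, w))    # ≈ 0.0: gradient norm already 1
```

In the full method this penalty is added to the critic's Wasserstein loss, which is what makes training more stable than a vanilla GAN on small hyperspectral training sets.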
ISSN: 2072-4292
DOI:10.3390/rs14194910