Multiscale CNNs Ensemble Based Self-Learning for Hyperspectral Image Classification

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2020-09, Vol. 17 (9), pp. 1593-1597
Authors: Fang, Leyuan; Zhao, Wenke; He, Nanjun; Zhu, Jian
Format: Article
Language: English
Description
Abstract: Fully supervised methods for hyperspectral image (HSI) classification usually require a considerable number of training samples to obtain high classification accuracy. However, collecting such training samples is time-consuming and difficult. In this context, semisupervised learning, which can effectively augment the number of training samples and exploit the underlying information in the unlabeled samples, has gained much attention. In this letter, we propose a Multiscale convolutional neural networks (CNNs) Ensemble Based Self-Learning (MCE-SL) method for semisupervised HSI classification. The proposed MCE-SL method consists of two stages. In the first stage, spatial information at different scales is extracted from the limited labeled training samples to train several CNN models. In the second stage, the trained multiscale CNNs are used to classify the unlabeled samples. After error correction, the problem of partially incorrect labels is alleviated, and unlabeled samples with high-confidence predictions are added to the original training data set for the next training iteration. We conduct comprehensive experiments on two real HSI data sets, and the experimental results show that the proposed MCE-SL can obtain better classification performance than several traditional semisupervised methods within a few iterations.
ISSN:1545-598X
1558-0571
DOI:10.1109/LGRS.2019.2950441
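The self-learning loop described in the abstract hinges on one step: deciding which unlabeled samples the multiscale CNN ensemble labels confidently enough to promote into the training set. The sketch below illustrates that selection step only, under assumptions not taken from the letter: it treats each scale's CNN as a black box emitting class probabilities, requires unanimous agreement across scales as a stand-in for the error-correction stage, and uses a hypothetical confidence threshold. The function name and threshold value are illustrative, not the authors' implementation.

```python
import numpy as np

def select_pseudo_labels(probas, threshold=0.9):
    """Select unlabeled samples to promote to the training set.

    probas: array of shape (n_models, n_samples, n_classes) holding
            each scale-specific model's class probabilities.
    Keeps a sample only if every model predicts the same class
    (a simple proxy for the ensemble's error correction) and the
    ensemble's mean confidence for that class exceeds `threshold`.
    Returns (indices of kept samples, their pseudo-labels).
    """
    preds = probas.argmax(axis=2)                # (n_models, n_samples)
    agree = (preds == preds[0]).all(axis=0)      # unanimous across scales
    mean_conf = probas.mean(axis=0).max(axis=1)  # ensemble confidence
    keep = agree & (mean_conf > threshold)
    return np.where(keep)[0], preds[0][keep]

# Toy run: 3 "scales", 4 unlabeled samples, 2 classes.
probas = np.array([
    [[0.95, 0.05], [0.6, 0.4], [0.10, 0.90], [0.55, 0.45]],
    [[0.92, 0.08], [0.4, 0.6], [0.20, 0.80], [0.52, 0.48]],
    [[0.97, 0.03], [0.7, 0.3], [0.15, 0.85], [0.90, 0.10]],
])
idx, labels = select_pseudo_labels(probas)
# Only sample 0 survives: sample 1 lacks agreement, samples 2 and 3
# lack confidence. idx == [0], labels == [0].
```

In a full implementation the returned samples and pseudo-labels would be appended to the labeled set and the multiscale CNNs retrained, repeating until few new samples qualify or an iteration budget is reached.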