Generative Adversarial Network-Based Full-Space Domain Adaptation for Land Cover Classification From Multiple-Source Remote Sensing Images
Saved in:
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2021-05, Vol. 59 (5), pp. 3816-3828
Main Authors: , ,
Format: Article
Language: English
Online Access: Order full text
Abstract: The accuracy of remote sensing image segmentation and classification is known to decrease dramatically when the source and target images come from different sources; while deep learning-based models have boosted performance, they are only effective when trained with a large number of labeled source images that are similar to the target images. In this article, we propose a generative adversarial network (GAN)-based domain adaptation for land cover classification using new target remote sensing images that differ enormously from the labeled source images. In the GAN, the source and target images are fully aligned in the image-space, feature-space, and output-space domains in two stages via adversarial learning. The source images are translated to the style of the target images and then used to train a fully convolutional network (FCN) for semantic segmentation, which classifies the land cover types of the target images. The domain adaptation and segmentation are integrated into an end-to-end framework. The experiments that we conducted on a multisource data set covering more than 3500 km² with 51,560 256×256 high-resolution satellite images of Wuhan city, and on a cross-city data set with 11,383 256×256 aerial images of Potsdam and Vaihingen, demonstrated that our method exceeded recent GAN-based domain adaptation methods by at least 6.1% and 4.9% in the mean intersection over union (mIoU) and overall accuracy (OA) indexes, respectively. We also showed that our GAN is a generic framework that can be applied to other domain transfer methods to boost their performance.
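As a concrete illustration of the pipeline the abstract describes, below is a minimal PyTorch sketch of full-space adversarial alignment: a generator translates source images toward the target style (image space), discriminators on the segmenter's features (feature space) and per-pixel predictions (output space) push the two domains together, and the FCN is supervised only on the translated source images. All module names, network depths, loss weights, and shapes here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 6  # hypothetical number of land cover classes


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Generator(nn.Module):
    """Image-space alignment: translates source images toward the target style."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 64), conv_block(64, 64),
                                 nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.net(x)


class FCNSegmenter(nn.Module):
    """Toy FCN returning both features (for feature-space alignment)
    and per-pixel class scores (for output-space alignment)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, 64), conv_block(64, 128))
        self.classifier = nn.Conv2d(128, NUM_CLASSES, 1)

    def forward(self, x):
        feat = self.encoder(x)
        return feat, self.classifier(feat)


class Discriminator(nn.Module):
    """PatchGAN-style critic; in_ch selects the space being aligned
    (3 = image space, 128 = feature space, NUM_CLASSES = output space)."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def adv_loss(logits, is_real):
    target = torch.full_like(logits, 1.0 if is_real else 0.0)
    return F.binary_cross_entropy_with_logits(logits, target)


def train_step(G, S, D_img, D_feat, D_out, src, src_lbl, tgt, opt_gs, opt_d):
    """One simplified update over a paired source/target batch."""
    fake_tgt = G(src)             # stage 1: translate source to target style
    feat_s, pred_s = S(fake_tgt)  # stage 2: segment translated source
    feat_t, pred_t = S(tgt)       # segment the unlabeled target

    # Generator + segmenter: supervised loss on the (still valid) source
    # labels, plus adversarial terms aligning all three spaces.
    loss_gs = (F.cross_entropy(pred_s, src_lbl)
               + adv_loss(D_img(fake_tgt), True)   # translations pass as target
               + adv_loss(D_feat(feat_t), True)    # target features pass as source
               + adv_loss(D_out(pred_t), True))    # target predictions pass as source
    opt_gs.zero_grad(); loss_gs.backward(); opt_gs.step()

    # Discriminators: learn to separate the two domains in each space.
    loss_d = (adv_loss(D_img(tgt), True) + adv_loss(D_img(fake_tgt.detach()), False)
              + adv_loss(D_feat(feat_s.detach()), True) + adv_loss(D_feat(feat_t.detach()), False)
              + adv_loss(D_out(pred_s.detach()), True) + adv_loss(D_out(pred_t.detach()), False))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_gs.item(), loss_d.item()


if __name__ == "__main__":
    G, S = Generator(), FCNSegmenter()
    D_img, D_feat, D_out = Discriminator(3), Discriminator(128), Discriminator(NUM_CLASSES)
    opt_gs = torch.optim.Adam([*G.parameters(), *S.parameters()], lr=2e-4)
    opt_d = torch.optim.Adam([*D_img.parameters(), *D_feat.parameters(),
                              *D_out.parameters()], lr=2e-4)
    # Dummy 64x64 patches stand in for the paper's 256x256 tiles.
    src = torch.randn(2, 3, 64, 64)
    src_lbl = torch.randint(0, NUM_CLASSES, (2, 64, 64))
    tgt = torch.randn(2, 3, 64, 64)
    print(train_step(G, S, D_img, D_feat, D_out, src, src_lbl, tgt, opt_gs, opt_d))
```

The design point the sketch preserves is that style translation leaves the pixel geometry intact, so the original source labels remain valid supervision for the translated images while the three discriminators close the domain gap.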
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2020.3020804