GSGAN: Learning controllable geospatial images generation

Bibliographic Details
Published in: IET Image Processing, 2023-02, Vol. 17 (2), pp. 401-417
Authors: Su, Xingzhe; Lin, Yijun; Zheng, Quan; Wu, Fengge; Zheng, Changwen; Zhao, Junsuo
Format: Article
Language: English
Online access: Full text
Description
Abstract: Compared with natural images, geospatial images cover larger areas and have more complex content. Few algorithms exist for generating controllable geospatial images, and their results are of low quality. To address this problem, this paper proposes the Geospatial Style Generative Adversarial Network (GSGAN) to generate controllable, high-quality geospatial images. Current conditional generators suffer from mode collapse in the geospatial domain; this is addressed via a modified mode seeking regularization term grounded in contrastive learning theory. In addition, the discriminator architecture is modified to process both the global feature information and the texture information of geospatial images, and a feature loss is introduced in the generator to stabilize training and improve generated image quality. Comprehensive experiments on the UC Merced Land Use Dataset, the NWPU-RESISC45 Dataset, and the AID Dataset evaluate all compared methods. The results show that our method outperforms state-of-the-art models: it not only generates high-quality, controllable geospatial images but also enables the discriminator to learn better representations.
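The abstract names two training-side components: a mode seeking regularization term against conditional mode collapse and a generator-side feature loss for training stability. As a rough illustration only, the sketch below shows the standard mode seeking regularization of Mao et al. (2019) and a generic feature matching loss in PyTorch; the paper's actual contrastive-learning modification and exact feature loss are not specified in this record, and the `generator(cond, z)` call signature and the `discriminator.extract_features` method are hypothetical interfaces assumed for the example.

```python
import torch


def mode_seeking_loss(generator, cond, z1, z2, eps=1e-5):
    """Standard mode seeking regularization (Mao et al., 2019), not the
    paper's modified contrastive variant.

    Two latent codes under the same condition should map to distinct
    images; minimizing the inverse distance ratio pushes outputs apart
    and counteracts conditional mode collapse.
    """
    img1 = generator(cond, z1)  # hypothetical generator interface
    img2 = generator(cond, z2)
    ratio = torch.mean(torch.abs(img1 - img2)) / torch.mean(torch.abs(z1 - z2))
    return 1.0 / (ratio + eps)


def feature_matching_loss(discriminator, real_images, fake_images):
    """One common reading of a generator 'feature loss': match intermediate
    discriminator features of generated images to those of real images."""
    real_feats = discriminator.extract_features(real_images)  # hypothetical method
    fake_feats = discriminator.extract_features(fake_images)
    return torch.mean(torch.abs(real_feats.detach() - fake_feats))
```

Both terms would typically be added to the generator's adversarial loss with small weighting coefficients; the weights and the contrastive formulation used by GSGAN are described in the full paper, not in this record.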
ISSN: 1751-9659
      1751-9667
DOI: 10.1049/ipr2.12641