MSTCGAN: Multiscale Time Conditional Generative Adversarial Network for Long-Term Satellite Image Sequence Prediction


Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, pp. 1-16
Authors: Dai, Kuai; Li, Xutao; Ye, Yunming; Feng, Shanshan; Qin, Danyu; Ye, Rui
Format: Article
Language: English
Description
Summary: Satellite image sequence prediction is a crucial and challenging task. Previous studies apply optical flow methods or existing deep learning spatial-temporal sequence models to the task. However, they suffer either from oversimplified model assumptions or from blurry predictions and sequential error accumulation when long-term forecasts are required. In this article, we propose a novel multiscale time conditional generative adversarial network (MSTCGAN). To address the sequential error accumulation issue, MSTCGAN adopts a parallel prediction framework that produces the future image sequence from a one-hot time condition input. In addition, a powerful multiscale generator is designed with multihead axial attention, which helps preserve fine-grained details for appearance consistency. Moreover, we develop a temporal discriminator to address the blurriness issue and maintain motion consistency in the predictions. Extensive experiments conducted on the FengYun-4A satellite dataset demonstrate the effectiveness and superiority of the proposed method over state-of-the-art approaches.
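To illustrate the parallel prediction idea the abstract describes, here is a minimal sketch of one-hot time conditioning. The generator below is a hypothetical stand-in (not the paper's network): the point is only that every future frame is produced directly from the observed context plus a time condition, rather than by feeding predictions back in, so errors cannot accumulate across steps.

```python
import numpy as np

def one_hot(t, horizon):
    """One-hot time condition: selects which future step to generate."""
    v = np.zeros(horizon)
    v[t] = 1.0
    return v

def generate_sequence(context, horizon, generator):
    """Parallel prediction: each future frame is produced independently
    from the observed context and a one-hot time condition, so no
    predicted frame is fed back in and sequential errors cannot build up."""
    return [generator(context, one_hot(t, horizon)) for t in range(horizon)]

# Toy stand-in "generator": just offsets the context by the conditioned step.
toy_gen = lambda ctx, cond: ctx + int(cond.argmax())
frames = generate_sequence(context=100, horizon=4, generator=toy_gen)
# frames → [100, 101, 102, 103]
```

In a recursive scheme, frame t+1 would be generated from the *predicted* frame t, compounding any error; here each call depends only on the real context, which is the property the abstract credits for avoiding sequential error accumulation.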
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2022.3181279