Learning Spatial-Temporal Consistency for Satellite Image Sequence Prediction
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2023-01, Vol. 61, p. 1-1
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: As an extremely challenging spatial-temporal sequence prediction task, satellite image sequence prediction has a wide range of significant real-world applications. Although many deep learning models have been developed for spatial-temporal sequence prediction, existing methods still struggle to maintain spatial-temporal consistency, which leads to inaccurate and blurry satellite image sequence predictions. To maintain spatial-temporal consistency and achieve high-quality satellite image sequence prediction, we propose a novel and effective spatial-temporal consistency network (STCNet). In STCNet, a multi-level motion memory-based predictor is proposed to accurately predict the motion patterns of satellite image sequences and thereby ensure temporal consistency. Then, a carefully designed time-variant frame discriminator enhances the perceptual quality of predicted frames to guarantee spatial consistency while maintaining the motion coherency of the predicted sequences. Moreover, a scheduled sampling strategy is proposed to reduce the optimization difficulty and better train the proposed method. Comprehensive experiments on satellite image sequences from the FY-4A meteorological satellite verify the effectiveness, applicability, and adaptability of our method compared to state-of-the-art approaches under challenging scenarios.
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2023.3303947
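
The abstract mentions a scheduled sampling strategy used to ease optimization during training. As a point of reference, the following is a minimal, generic sketch of scheduled sampling for autoregressive frame prediction in PyTorch. The toy `OneStepPredictor`, the linear decay schedule, and the rollout interface are illustrative assumptions and not the actual STCNet implementation described in the paper.

```python
# Minimal sketch of scheduled sampling for autoregressive frame prediction.
# Illustrative only: the toy predictor, the linear decay schedule, and the
# rollout interface are assumptions, not the STCNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OneStepPredictor(nn.Module):
    """Toy stateless one-step-ahead predictor standing in for the real model."""

    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def gt_probability(step, total_steps, start=1.0, end=0.0):
    """Linearly decay the probability of feeding ground-truth frames back in."""
    frac = min(step / max(total_steps, 1), 1.0)
    return start + (end - start) * frac


def rollout_with_scheduled_sampling(model, frames, n_context, step, total_steps):
    """Predict frames[:, n_context:] from a (batch, time, C, H, W) sequence.

    During the context window the model always sees ground truth; afterwards
    each next input is the ground-truth frame with probability eps and the
    model's own prediction otherwise, with eps decaying over training.
    """
    eps = gt_probability(step, total_steps)
    batch, total_t = frames.shape[:2]
    inp, preds = frames[:, 0], []
    for t in range(1, total_t):
        pred = model(inp)                      # predict the frame at time t
        if t >= n_context:
            preds.append(pred)
        if t < total_t - 1:
            if t < n_context:
                inp = frames[:, t]             # warm-up: always feed ground truth
            else:
                use_gt = (torch.rand(batch, 1, 1, 1, device=frames.device) < eps).float()
                inp = use_gt * frames[:, t] + (1.0 - use_gt) * pred
    return torch.stack(preds, dim=1)


# Usage on dummy data: 4 context frames, 6 predicted frames.
model = OneStepPredictor()
frames = torch.randn(2, 10, 1, 64, 64)
out = rollout_with_scheduled_sampling(model, frames, n_context=4, step=100, total_steps=1000)
loss = F.mse_loss(out, frames[:, 4:])
```

The intent of the decay is that the model sees mostly ground-truth inputs early in training and gradually shifts toward consuming its own predictions, which narrows the gap between training and multi-step inference; the exact schedule and mixing rule used by the paper may differ from this sketch.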