Cloud Cover Prediction Model Using Multichannel Geostationary Satellite Images

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2024, Vol. 62, pp. 1-14
Main authors: Cho, Eunbin; Kim, Eunbin; Choi, Yeji
Format: Article
Language: English
Online access: Full text
Description
Abstract: Cloud cover influences the solar radiation reaching the Earth's surface, impacting a range of industries. Recently, advances in weather prediction have been made by applying deep learning methods to satellite images, improving the accuracy of cloud variability forecasts. Despite these advances, computational limitations arise from the large size of satellite images. Although images are conventionally cropped or downscaled to smaller sizes, these practices have been observed to compromise the accuracy of the predicted images. In this study, we introduce Cloudstream, a novel approach that combines a convolutional neural network (CNN)-based encoder and decoder with PredRNN-V2 as a backbone model, prioritizing computational efficiency while maintaining prediction accuracy. Cloudstream predicts future cloud detection images and was trained on a dataset of sequential cloud detection and infrared channel images from the Korean geostationary meteorological satellite GEO-KOMPSAT-2A. In addition, we explored the use of nonpatch images in the development of Cloudstream. A quantitative evaluation of the model was performed using two input sizes for the same geographic area: 128 × 128 pixels and 512 × 512 pixels. There were no significant differences in F1 scores between Cloudstream and PredRNN-V2 when processing 128 × 128 inputs; however, Cloudstream required three times fewer floating-point operations (FLOPs) than PredRNN-V2. We also found that 512 × 512 high-resolution inputs yield superior prediction performance compared with 128 × 128 low-resolution inputs. This study contributes to the refinement of deep-learning-based video frame prediction models by optimizing satellite image prediction and addressing its computational challenges.
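The efficiency argument in the abstract, running the recurrent backbone on CNN-encoded features rather than on full-resolution frames, can be illustrated with a back-of-the-envelope FLOPs count. All layer sizes, channel counts, and the downsampling factor below are illustrative assumptions for a generic ConvLSTM-style cell, not the paper's actual Cloudstream configuration:

```python
def conv2d_flops(h, w, c_in, c_out, k):
    """Approximate FLOPs (multiply + add) of one stride-1 'same' convolution."""
    return 2 * h * w * c_in * c_out * k * k

# Illustrative settings (assumed, not taken from the paper):
H = W = 128      # input cloud-detection frame size
C_HID = 64       # hidden channels of the recurrent cell
K = 3            # kernel size; a ConvLSTM-style cell computes 4 gates per step

# Recurrent backbone applied directly at full resolution:
full_res = conv2d_flops(H, W, C_HID, 4 * C_HID, K)

# With an assumed 4x-downsampling CNN encoder, the backbone runs at 32 x 32:
encoder = conv2d_flops(H, W, 1, C_HID, K)              # frame -> feature map
backbone = conv2d_flops(H // 4, W // 4, C_HID, 4 * C_HID, K)
decoder = conv2d_flops(H, W, C_HID, 1, K)              # feature map -> cloud mask
encoded_total = encoder + backbone + decoder

print(f"full-resolution backbone: {full_res / 1e9:.2f} GFLOPs/step")
print(f"encoder+backbone+decoder: {encoded_total / 1e9:.2f} GFLOPs/step")
print(f"reduction factor: {full_res / encoded_total:.1f}x")
```

Because convolution cost scales with spatial area, a 4× spatial downsampling cuts the per-step backbone cost by roughly 16×, and the cheap encoder/decoder passes only partially offset that saving. The same mechanism, with the paper's own layer sizes, is consistent with the reported threefold FLOPs reduction.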
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2024.3473992