Creating Synthetic Radar Imagery Using Convolutional Neural Networks


Bibliographic Details
Published in: Journal of Atmospheric and Oceanic Technology, 2018-12, Vol. 35 (12), p. 2323-2338
Authors: Veillette, Mark S., Hassey, Eric P., Mattioli, Christopher J., Iskenderian, Haig, Lamey, Patrick M.
Format: Article
Language: English
Online access: Full text
Description
Abstract: In this work, deep convolutional neural networks (CNNs) are shown to be an effective model for fusing heterogeneous geospatial data to create radar-like analyses of precipitation intensity (i.e., synthetic radar). The CNN trained in this work has a directed acyclic graph (DAG) structure that takes inputs from multiple data sources with varying spatial resolutions. These data sources include geostationary satellite imagery (1-km visible and four 4-km infrared bands), lightning flash density from the Earth Networks Total Lightning Network, and numerical model data from NOAA's 13-km Rapid Refresh model. A regression is performed in the final layer of the network using NEXRAD-derived data mapped onto a 1-km grid as the target variable. The outputs of the CNN are fused with analyses from NEXRAD to create seamless radar mosaics that extend to offshore sectors and beyond. The model is calibrated and validated using both NEXRAD and spaceborne radar from NASA's Global Precipitation Measurement (GPM) Mission's Core Observatory satellite. The advantages over the random forest–based approach used in previous works are discussed.
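To make the described architecture concrete, the sketch below illustrates the general idea of a DAG-structured CNN that fuses multi-resolution inputs (1-km visible, 4-km infrared, lightning flash density, 13-km model fields) into a per-pixel regression on a 1-km grid. It is a minimal, hypothetical example written with the Keras functional API; the tile size, channel counts, layer widths, upsampling scheme, and loss function are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a multi-input DAG CNN for synthetic radar regression.
# Assumptions: a 256 km x 256 km tile, small illustrative layer widths, MSE loss.
import tensorflow as tf
from tensorflow.keras import layers, Model

# Inputs at their (approximate) native resolutions for one tile.
vis_in = layers.Input(shape=(256, 256, 1), name="goes_visible_1km")   # 1-km visible band
ir_in  = layers.Input(shape=(64, 64, 4),  name="goes_infrared_4km")   # four 4-km infrared bands
ltg_in = layers.Input(shape=(64, 64, 1),  name="lightning_density")   # flash density (assumed 4-km grid)
nwp_in = layers.Input(shape=(20, 20, 8),  name="rap_13km_fields")     # 13-km model fields (8 assumed channels)

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU activations."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

# Process each branch, then bring coarser branches up to the 1-km (256 x 256) grid.
vis = conv_block(vis_in, 16)
ir  = layers.UpSampling2D(size=4)(conv_block(ir_in, 16))
ltg = layers.UpSampling2D(size=4)(conv_block(ltg_in, 8))
nwp = layers.Resizing(256, 256)(conv_block(nwp_in, 16))  # 13-km grid is not an integer multiple of 1 km

# Merging branches gives the network its DAG (rather than simple chain) structure.
merged = layers.Concatenate()([vis, ir, ltg, nwp])
merged = conv_block(merged, 32)

# Final layer: per-pixel regression against a NEXRAD-derived intensity field on the 1-km grid.
output = layers.Conv2D(1, 1, activation="linear", name="synthetic_radar")(merged)

model = Model(inputs=[vis_in, ir_in, ltg_in, nwp_in], outputs=output)
model.compile(optimizer="adam", loss="mse")  # loss choice is an assumption for this sketch
```

The functional API is used here only because it expresses multi-parent graph nodes naturally; any framework that supports branching and merging layers could express the same DAG topology.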
ISSN: 0739-0572, 1520-0426
DOI: 10.1175/JTECH-D-18-0010.1