Toward Better Planetary Surface Exploration by Orbital Imagery Inpainting


Bibliographic Details
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, Vol. 14, p. 175-189
Authors: Roy, Hiya; Chaudhury, Subhajit; Yamasaki, Toshihiko; Hashimoto, Tatsuaki
Format: Article
Language: English
Online access: Full text
Description

Summary: Planetary surface images are collected by sophisticated imaging devices onboard orbiting spacecraft. Although these images enable scientists to discover and visualize the unknown, they often suffer from 'no-data' regions where data could not be acquired by the onboard instrument due to limitations in instrument operation time and satellite orbiter control. This greatly reduces the usability of the captured data for scientific purposes. To alleviate this problem, in this article we propose a machine learning-based 'no-data' region prediction algorithm. Specifically, we leverage a deep convolutional neural network (CNN) based image inpainting algorithm to predict such unphotographed pixels in a context-aware fashion using adversarial learning on planetary images. The benefit of our proposed method is that it augments features in the unphotographed regions, leading to better downstream tasks such as interesting-landmark classification. We use Moon and Mars orbital images captured by JAXA's Kaguya mission and NASA's Mars Reconnaissance Orbiter (MRO) for experimental purposes and demonstrate that our method can fill in the unphotographed regions of Moon and Mars images with good visual and perceptual quality, as measured by improved PSNR and SSIM scores. Additionally, our image inpainting algorithm aids feature learning for CNN-based landmark classification, as evidenced by an improved F1-score of 0.88 compared to 0.83 on the original Mars dataset.
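The abstract reports inpainting quality via PSNR and SSIM scores. As a minimal illustration of the PSNR metric only (the `psnr` function and the toy data below are ours, not from the paper, which presumably uses standard library implementations):

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB (higher is better)."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: an 8x8 "ground truth" patch and a noisy "inpainted" estimate.
rng = np.random.default_rng(0)
truth = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
estimate = truth + rng.normal(0, 5, size=(8, 8))  # small reconstruction error
print(psnr(truth, estimate))
```

In practice, the metric would be computed only over the filled-in 'no-data' mask rather than the whole image, since the unmasked pixels are copied from the input.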
ISSN:1939-1404
2151-1535
DOI:10.1109/JSTARS.2020.3038778