Creating virtual sensors using learning based super resolution and data fusion
Saved in:
Main authors:
Format: Conference proceedings
Language: English
Subjects:
Online access: Order full text
Summary: Designing, building, and launching missions to deploy space-based sensors typically takes many years and costs billions of dollars. Missions are often delayed or canceled, and data from some parts of the world may be unavailable. When a physical sensor is unavailable for any reason, we propose the notion of a virtual sensor, in which we exploit the hundreds of space-based sensors already observing the Earth, along with statistical learning algorithms, to fuse multi-sensor data and estimate a virtual sensor image. The algorithm we present in this paper uses several physical source sensors to build a virtual target sensor with different characteristics and higher resolution than the source sensors. The approach is based on finding the target sensor data that maximizes the a posteriori (MAP) probability. We solve the MAP problem using a Bayesian network framework. We present a proof-of-concept case showing that we can predict the values of 500 m resolution band 3 data using 1 km resolution images from bands 8, 9, and 10 of the Moderate Resolution Imaging Spectroradiometer (MODIS). We test the performance of our algorithm by predicting target sensor data for which we have ground truth data, using the root mean square error criterion. The results show the effectiveness of our approach.
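The abstract casts the virtual sensor as the target image that maximizes the a posteriori probability p(target | sources), which by Bayes' rule is proportional to p(sources | target) p(target), and solves this MAP problem with a Bayesian network. As a rough illustration of the overall pipeline only (fuse coarser source bands, predict a finer target band, score against ground truth with RMSE), the sketch below substitutes a simple per-pixel linear least-squares model for the paper's Bayesian-network MAP solver; the function names, array shapes, and synthetic data are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a learned "virtual sensor" baseline, NOT the paper's
# Bayesian-network MAP solver: predict a 500 m target band from three 1 km
# source bands with a per-pixel linear model, then score with RMSE.
# Names, shapes, and the synthetic data below are illustrative assumptions.

import numpy as np

def upsample2x(band):
    """Nearest-neighbour upsampling of a 1 km band onto the 500 m grid."""
    return np.repeat(np.repeat(band, 2, axis=0), 2, axis=1)

def fit_virtual_sensor(src_bands, target):
    """Fit w, b so that target ~= sum_k w_k * upsample(src_k) + b (least squares)."""
    X = np.stack([upsample2x(b).ravel() for b in src_bands], axis=1)
    X = np.column_stack([X, np.ones(X.shape[0])])  # bias column
    coef, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)
    return coef

def predict_virtual_sensor(src_bands, coef):
    """Apply the fitted per-pixel linear model to produce the virtual target band."""
    X = np.stack([upsample2x(b).ravel() for b in src_bands], axis=1)
    X = np.column_stack([X, np.ones(X.shape[0])])
    out_shape = (src_bands[0].shape[0] * 2, src_bands[0].shape[1] * 2)
    return (X @ coef).reshape(out_shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for 1 km source bands (think MODIS bands 8, 9, 10)...
    b8, b9, b10 = (rng.random((100, 100)) for _ in range(3))
    # ...and a synthetic 500 m "band 3" ground truth correlated with them.
    truth = upsample2x(0.5 * b8 + 0.3 * b9 + 0.2 * b10) \
        + 0.01 * rng.normal(size=(200, 200))

    coef = fit_virtual_sensor([b8, b9, b10], truth)
    pred = predict_virtual_sensor([b8, b9, b10], coef)
    rmse = np.sqrt(np.mean((pred - truth) ** 2))  # evaluation criterion from the abstract
    print(f"RMSE of virtual band-3 estimate: {rmse:.4f}")
```

On real data, the ground-truth 500 m band would come from the sensor being emulated, and the linear model would be replaced by the probabilistic fusion model the paper describes.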
ISSN: 1095-323X, 2996-2358
DOI: 10.1109/AERO.2009.4839483