Eyes in the Dark: Distributed Scene Understanding for Disaster Management

Bibliographic Details
Published in: IEEE Transactions on Parallel and Distributed Systems, 2017-12, Vol. 28 (12), p. 3458-3471
Authors: Li, Liangzhi; Ota, Kaoru; Dong, Mianxiong; Borjigin, Wuyunzhaola
Format: Article
Language: English
Description
Abstract: Robots are a great substitute for humans in exploring dangerous areas and can be of great help in disaster management. Although the rise of depth-sensor technologies has given a huge boost to robotic vision research, traditional approaches cannot be applied directly to disaster-handling robots due to several limitations. In this paper, we focus on 3D robotic perception and propose a view-invariant Convolutional Neural Network (CNN) model for scene understanding in disaster scenarios. The proposed system is highly distributed and parallel, which greatly improves the efficiency of network training. In our system, two separate CNNs are used to propose objects from the input data and to classify their categories, respectively. We attempt to overcome the difficulties and restrictions caused by disasters using several specially designed multi-task loss functions. The most significant advantage of our work is that the proposed method can learn a view-invariant feature with no requirement for RGB data, which is essential for harsh, disordered, and changeable environments. Additionally, an effective optimization algorithm that accelerates the learning process is also included in our work. Simulations demonstrate that our approach is robust and efficient, and outperforms the state of the art in several related tasks.
ISSN: 1045-9219, 1558-2183
DOI: 10.1109/TPDS.2017.2740294
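
Note: The abstract describes a two-stage, depth-only pipeline in which one CNN proposes candidate objects from the input data and a second CNN classifies their categories, trained with multi-task loss functions. The sketch below is a minimal illustration of such a pipeline, assuming PyTorch; the module names, layer sizes, and loss weights are hypothetical and are not taken from the paper.

# Illustrative sketch only -- not the authors' implementation.
# Assumes PyTorch; all names, sizes, and weights are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProposalCNN(nn.Module):
    """First-stage CNN: proposes candidate object regions from a depth map (no RGB)."""
    def __init__(self, num_anchors=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),   # single-channel depth input
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.objectness = nn.Conv2d(64, num_anchors, 1)      # object/background score per anchor
        self.box_deltas = nn.Conv2d(64, num_anchors * 4, 1)  # box regression offsets per anchor

    def forward(self, depth):
        feat = self.backbone(depth)
        return self.objectness(feat), self.box_deltas(feat)

class ClassifierCNN(nn.Module):
    """Second-stage CNN: classifies each proposed depth crop into an object category."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, depth_crop):
        return self.head(self.features(depth_crop))

def multi_task_loss(obj_logits, obj_labels, box_pred, box_target,
                    cls_logits, cls_labels, w_box=1.0, w_cls=1.0):
    """Combine objectness, box-regression, and classification terms into one loss,
    in the spirit of the multi-task losses mentioned in the abstract."""
    l_obj = F.binary_cross_entropy_with_logits(obj_logits, obj_labels)
    l_box = F.smooth_l1_loss(box_pred, box_target)
    l_cls = F.cross_entropy(cls_logits, cls_labels)
    return l_obj + w_box * l_box + w_cls * l_cls

In practice, both networks could be wrapped in torch.nn.parallel.DistributedDataParallel and trained across multiple workers, in the spirit of the highly distributed and parallel training the abstract emphasizes.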