Shared environment representation for a human-robot team performing information fusion
Published in: Journal of Field Robotics, 2007-11, Vol. 24 (11-12), p. 911-942
Main authors: , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: This paper addresses the problem of building a shared environment representation by a human-robot team. Rich environment models are required in real applications both for autonomous operation of robots and to support human decision-making. Two probabilistic models are used to describe outdoor environment features such as trees: geometric (position in the world) and visual. The visual representation is used to improve data association and to classify features. Both models are able to incorporate observations from robotic platforms and human operators. Physically, humans and robots form a heterogeneous sensor network. In our experiments, the human-robot team consists of an unmanned air vehicle, a ground vehicle, and two human operators. They are deployed for an information gathering task and perform information fusion cooperatively. All aspects of the system, including the fusion algorithms, are fully decentralized. Experimental results are presented in the form of the acquired multi-attribute feature map, information exchange patterns demonstrating human-robot information fusion, and a quantitative model evaluation. Lessons learned from deploying the system in the field are also presented. © 2007 Wiley Periodicals, Inc.
ISSN: 1556-4959, 1556-4967
DOI: 10.1002/rob.20201
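The abstract above describes probabilistic geometric feature models that are fused in a fully decentralized way across robots and human operators. As a purely illustrative sketch (not taken from the paper, whose algorithms are not reproduced in this record), the snippet below fuses independent Gaussian estimates of a single feature's position in information (inverse-covariance) form, a common building block of decentralized data fusion; all numbers, platform labels, and function names are hypothetical.

```python
import numpy as np

def to_information(mean, cov):
    """Convert a Gaussian (mean, covariance) to information form (Y, y)."""
    Y = np.linalg.inv(cov)   # information matrix
    y = Y @ mean             # information vector
    return Y, y

def fuse_information(estimates):
    """Fuse independent Gaussian estimates by summing their information contributions."""
    Y_total = sum(Y for Y, _ in estimates)
    y_total = sum(y for _, y in estimates)
    cov = np.linalg.inv(Y_total)
    mean = cov @ y_total
    return mean, cov

# Hypothetical 2D position estimates (metres) of one tree-like feature:
uav_est   = to_information(np.array([12.4, -3.1]), np.diag([4.0, 4.0]))  # air vehicle: coarse
ugv_est   = to_information(np.array([11.8, -2.7]), np.diag([0.5, 0.5]))  # ground vehicle: precise
human_est = to_information(np.array([12.0, -3.0]), np.diag([2.0, 2.0]))  # human operator report

mean, cov = fuse_information([uav_est, ugv_est, human_est])
print("fused position:", mean)
print("fused covariance:\n", cov)
```

Because information contributions simply add, each platform in such a scheme can maintain and exchange local sums without a central fusion node, which is one way a fully decentralized architecture of this kind can be realized.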