A unified multimodal control framework for human–robot interaction

Detailed description

Bibliographic details
Published in: Robotics and Autonomous Systems, 2015-08, Vol. 70, p. 106-115
Main authors: Cherubini, Andrea; Passama, Robin; Fraisse, Philippe; Crosnier, André
Format: Article
Language: English
Online access: Full text
Description
Abstract: In human–robot interaction, the robot controller must reactively adapt to sudden changes in the environment (due to unpredictable human behaviour). This often requires operating in different modes and managing sudden signal changes from heterogeneous sensor data. In this paper, we present a multimodal sensor-based controller, enabling a robot to adapt to changes in the sensor signals (here, changes in the human collaborator's behaviour). Our controller is based on a unified task formalism and, in contrast with classical hybrid vision–force–position control, it enables smooth transitions and weighted combinations of the sensor tasks. The approach is validated in a mock-up industrial scenario, where pose, vision (from both a traditional camera and a Kinect), and force tasks must be realized either exclusively or simultaneously, for human–robot collaboration.

Highlights:
• A unified multimodal sensor-based control framework is proposed.
• Pose, vision and force tasks can be realized either exclusively or simultaneously.
• Self-adapting gains and homotopies between the tasks guarantee safe operation.
• The approach is validated in an industrial task: collaborative screwing.
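The abstract's core idea, weighted combinations of sensor tasks with smooth homotopy transitions between modes, can be illustrated with a minimal sketch. The code below is not the authors' formalism: the cubic smoothstep homotopy, the fixed gains, and all names (smoothstep, blended_joint_velocity, J_vis, J_frc) are illustrative assumptions; the paper's self-adapting gains and exact task formulation are in the full text.

```python
import numpy as np

def smoothstep(t, t0, t1):
    """C1-smooth homotopy weight rising from 0 to 1 over [t0, t1]."""
    s = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    return 3.0 * s**2 - 2.0 * s**3

def blended_joint_velocity(tasks, weights):
    """Weighted combination of sensor-task velocity commands.

    tasks   : list of (J, gain, err) tuples, one per sensor task
              (J: m_i x n task Jacobian, gain: scalar, err: m_i error vector)
    weights : list of scalars in [0, 1] (the homotopy weights)
    Returns the commanded joint velocity (n-vector).
    """
    n = tasks[0][0].shape[1]                       # number of joints
    qdot = np.zeros(n)
    for (J, gain, err), w in zip(tasks, weights):
        # Each task contributes a resolved-rate term via its Jacobian pseudoinverse.
        qdot += w * gain * np.linalg.pinv(J) @ err
    return qdot

# Example: fade a 6-joint arm from a vision task to a force task over one second,
# as might happen when the tool makes contact during collaborative screwing.
rng = np.random.default_rng(0)
J_vis, e_vis = rng.normal(size=(2, 6)), rng.normal(size=2)   # image-feature task
J_frc, e_frc = rng.normal(size=(6, 6)), rng.normal(size=6)   # wrench-regulation task

for t in (0.0, 0.5, 1.0):
    w = smoothstep(t, 0.0, 1.0)                   # 0 -> 1 as contact is established
    qdot = blended_joint_velocity(
        [(J_vis, 0.5, e_vis), (J_frc, 0.2, e_frc)],
        [1.0 - w, w])
    print(f"t={t:.1f}  w_force={w:.2f}  qdot={np.round(qdot, 3)}")
```

Because the weights vary continuously rather than switching discretely, the commanded joint velocity stays continuous across mode changes, which is the safety property the smooth transitions are meant to provide.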
ISSN: 0921-8890, 1872-793X
DOI: 10.1016/j.robot.2015.03.002