Real-Time Coordination in Human-Robot Interaction Using Face and Voice

Bibliographic Details
Published in: The AI Magazine 2016-12, Vol. 37 (4), pp. 19-31
Main Author: Skantze, Gabriel
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: When humans interact and collaborate with each other, they coordinate their turn-taking behaviors using verbal and nonverbal signals, expressed in the face and voice. If robots of the future are to engage in social interaction with humans, it is essential that they can generate and understand these behaviors. In this article, I give an overview of several studies that show how humans in interaction with a humanlike robot make use of the same coordination signals typically found in studies on human-human interaction, and that it is possible to automatically detect and combine these cues to facilitate real-time coordination. The studies also show that humans react naturally to such signals when used by a robot, without being given any special instructions. They follow the gaze of the robot to disambiguate referring expressions, they conform when the robot selects the next speaker using gaze, and they respond naturally to subtle cues, such as gaze aversion, breathing, facial gestures, and hesitation sounds.
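The abstract states that such cues can be automatically detected and combined for real-time coordination, but does not describe how. As a purely illustrative sketch (in Python; the cue set, weights, and threshold are assumptions made here, not the model described in the article), a simple weighted vote over detected face and voice signals might look like this:

```python
# Hypothetical sketch: fusing turn-taking cues into a single decision.
# Cue names, weights, and threshold are illustrative assumptions, not
# the system described in the article.
from dataclasses import dataclass

@dataclass
class TurnCues:
    """Observations extracted from the user's face and voice."""
    pause_ms: float         # silence since the user's last word
    gaze_at_robot: bool     # user looks at the robot (turn-yielding)
    pitch_falling: bool     # falling intonation at utterance end
    hesitation_sound: bool  # filled pause such as "uh" (turn-keeping)

def robot_should_speak(cues: TurnCues, threshold: float = 0.5) -> bool:
    """Fuse cues with a weighted vote.

    Positive evidence suggests the user is yielding the turn;
    negative evidence suggests they intend to continue speaking.
    """
    score = 0.0
    score += 0.4 if cues.pause_ms > 500 else -0.4   # long silence
    score += 0.3 if cues.gaze_at_robot else -0.2    # gaze hands over the turn
    score += 0.2 if cues.pitch_falling else -0.1    # prosodic completion
    score -= 0.5 if cues.hesitation_sound else 0.0  # "uh" holds the floor
    return score > threshold

# Example: a long pause with gaze and falling pitch yields the turn,
# while a short pause with a hesitation sound keeps it.
print(robot_should_speak(TurnCues(700, True, True, False)))   # True
print(robot_should_speak(TurnCues(200, False, False, True)))  # False
```

In a real system the weights would be learned from interaction data rather than hand-set, but the sketch conveys the idea of combining face and voice cues into one turn-taking decision.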
ISSN: 0738-4602
2371-9621
DOI: 10.1609/aimag.v37i4.2686