Generating robot gesture using a virtual agent framework

Full description

Bibliographic details
Main authors: Salem, M., Kopp, S., Wachsmuth, I., Joublin, F.
Format: Conference paper
Language: English
Description
Abstract: One of the crucial aspects in building sociable, communicative robots is to endow them with expressive nonverbal behaviors. Gesture is one such behavior, frequently used by human speakers to illustrate what they express in speech. The production of gestures, however, poses a number of challenges with regard to motor control for arbitrary, expressive hand-arm movement and its coordination with other interaction modalities. We describe an approach to enable the humanoid robot ASIMO to flexibly produce communicative gestures at run-time, building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to realize planned behavior representations on the spot. We present a control architecture that tightly couples ACE with ASIMO's perceptuo-motor system for multi-modal scheduling. In this way, we combine conceptual representation and planning with motor control primitives for meaningful arm movements of a physical robot body. First results of realized gesture representations are presented and discussed.
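The abstract describes a control architecture in which a behavior planner (ACE) is coupled with the robot's perceptuo-motor system so that planned gesture phases can be scheduled alongside speech. Purely as an illustrative sketch, and not the paper's actual ACE or ASIMO interface, the following Python fragment shows one minimal way such timed dispatch of gesture phases could be structured; GestureChunk, RobotArm, and schedule are all hypothetical names introduced here for illustration.

```python
# Hypothetical sketch of multi-modal gesture scheduling: a planner
# (stand-in for ACE) produces timed gesture phases, and a dispatcher
# sends them to a robot motor interface so strokes land at their
# planned onsets. None of these names are the real ACE/ASIMO APIs.
from dataclasses import dataclass
import time

@dataclass
class GestureChunk:
    """One planned gesture phase with its intended timing."""
    label: str            # e.g. "preparation", "stroke", "retraction"
    joint_targets: dict   # joint name -> target angle (radians)
    start: float          # onset relative to utterance start (seconds)
    duration: float       # seconds

class RobotArm:
    """Placeholder for the robot's arm motion interface."""
    def move_to(self, joint_targets: dict, duration: float) -> None:
        print(f"moving joints {joint_targets} over {duration:.2f}s")

def schedule(chunks: list[GestureChunk], arm: RobotArm) -> None:
    """Dispatch gesture phases at their planned onsets (open-loop:
    waits on a fixed timeline, then issues each motion command)."""
    t0 = time.monotonic()
    for chunk in sorted(chunks, key=lambda c: c.start):
        delay = chunk.start - (time.monotonic() - t0)
        if delay > 0:
            time.sleep(delay)
        arm.move_to(chunk.joint_targets, chunk.duration)

if __name__ == "__main__":
    plan = [
        GestureChunk("preparation", {"r_shoulder": 0.4, "r_elbow": 0.9}, 0.0, 0.5),
        GestureChunk("stroke",      {"r_shoulder": 0.8, "r_elbow": 0.3}, 0.5, 0.4),
        GestureChunk("retraction",  {"r_shoulder": 0.0, "r_elbow": 0.0}, 1.2, 0.6),
    ]
    schedule(plan, RobotArm())
```

A real system would presumably close the loop with perceptuo-motor feedback rather than sleeping on a fixed timeline, which is what the abstract's "tight coupling" of ACE with ASIMO's perceptuo-motor system suggests.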
ISSN: 2153-0858, 2153-0866
DOI: 10.1109/IROS.2010.5650572