MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze
Format: Article
Language: English
Abstract: As robots become more present in open human environments, it will become crucial for robotic systems to understand and predict human motion. Such capabilities depend heavily on the quality and availability of motion capture data. However, existing datasets of full-body motion rarely include 1) long sequences of manipulation tasks, 2) the 3D model of the workspace geometry, and 3) eye-gaze, all of which are important when a robot needs to predict the movements of humans in close proximity. Hence, in this paper, we present a novel dataset of full-body motion for everyday manipulation tasks that includes all of the above. The motion data was captured using a traditional motion capture system based on reflective markers. We additionally captured eye-gaze using a wearable pupil-tracking device. As we show in experiments, the dataset can be used for the design and evaluation of full-body motion prediction algorithms. Furthermore, our experiments show that eye-gaze is a powerful predictor of human intent. The dataset includes 180 minutes of motion capture data covering 1627 pick-and-place actions. It is available at https://humans-to-robots-motion.github.io/mogaze and is planned to be extended to collaborative tasks with two humans in the near future.
DOI: 10.48550/arxiv.2011.11552
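For readers who want to explore the recordings, the sketch below shows one way the capture data could be inspected, assuming it is distributed as HDF5 files, a common format for motion capture datasets. The file name and any dataset keys printed are assumptions for illustration, not a documented MoGaze API; consult the project page's README for the actual file layout.

```python
# Minimal sketch for inspecting a motion capture recording in HDF5 form.
# The file name below is hypothetical; check the MoGaze README for real names.
import h5py


def summarize(path: str) -> None:
    """Print every dataset in an HDF5 file along with its shape and dtype."""
    with h5py.File(path, "r") as f:
        def report(name, obj):
            # visititems walks the file tree; we only report leaf datasets.
            if isinstance(obj, h5py.Dataset):
                print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
        f.visititems(report)


if __name__ == "__main__":
    # Hypothetical name for a per-participant, per-recording file.
    summarize("p1_1_human_data.hdf5")
```

Listing shapes and dtypes first is a cheap way to confirm the frame rate, joint count, and sequence length before committing to a loading pipeline for motion prediction experiments.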