Versatile Demonstration Interface: Toward More Flexible Robot Demonstration Collection
Format: Article
Language: English
Abstract: Previous methods for Learning from Demonstration leverage several approaches
for a human to teach motions to a robot, including teleoperation, kinesthetic
teaching, and natural demonstrations. However, little previous work has
explored more general interfaces that allow for multiple demonstration types.
Given the varied preferences of human demonstrators and task characteristics, a
flexible tool that enables multiple demonstration types could be crucial for
broader robot skill training. In this work, we propose Versatile Demonstration
Interface (VDI), an attachment for collaborative robots that simplifies the
collection of three common types of demonstrations. Designed for flexible
deployment in industrial settings, our tool requires no additional
instrumentation of the environment. Our prototype interface captures human
demonstrations through a combination of vision, force sensing, and state
tracking (e.g., through robot proprioception or AprilTag tracking). Through
a user study where we deployed our prototype VDI at a local manufacturing
innovation center with manufacturing experts, we demonstrated the efficacy of
our prototype in representative industrial tasks. Interactions from our study
exposed a range of industrial use cases for VDI, clear relationships between
demonstration preferences and task criteria, and insights for future tool
design.
DOI: 10.48550/arxiv.2410.19141