Learning to Control Complex Robots Using High-Dimensional Interfaces: Preliminary Insights
| Main authors: | , , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract: Human body motions can be captured as a high-dimensional continuous signal using motion sensor technologies. The resulting data can be surprisingly rich in information, even when captured from persons with limited mobility. In this work, we explore the use of limited upper-body motions, captured via motion sensors, as inputs to control a 7 degree-of-freedom assistive robotic arm. It is possible that even dense sensor signals lack the salient information and independence necessary for reliable high-dimensional robot control. As the human learns over time in the context of this limitation, intelligence on the robot can be leveraged to better identify key learning challenges, provide useful feedback, and support individuals until the challenges are managed. In this short paper, we examine two uninjured participants' data from an ongoing study to extract preliminary results and share insights. We observe opportunities for robot intelligence to step in, including the identification of inconsistencies in time spent across all control dimensions, asymmetries in individual control dimensions, and user progress in learning. Machine reasoning about these situations may facilitate novel interface learning in the future.
DOI: 10.48550/arxiv.2110.04663
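
The abstract's idea of letting the robot flag inconsistent time spent across control dimensions and asymmetries within a dimension can be illustrated with a small sketch. The log format, function name, field layout, and threshold below are hypothetical assumptions for illustration only, not the authors' actual analysis pipeline: we assume a log of timestamped 7-dimensional velocity commands sent to the arm, and count a dimension as active whenever its commanded velocity exceeds a small threshold.

```python
import numpy as np

def dimension_usage_and_asymmetry(timestamps, commands, active_threshold=0.05):
    """Summarize how a user exercised each robot control dimension.

    timestamps: (T,) array of command times in seconds.
    commands:   (T, 7) array of commanded velocities, one column per
                degree of freedom of the arm (hypothetical log format).
    Returns per-dimension active time (seconds) and a signed asymmetry
    score in [-1, 1]; 0 means positive and negative motion were
    commanded for equal amounts of time.
    """
    timestamps = np.asarray(timestamps, dtype=float)
    commands = np.asarray(commands, dtype=float)
    # Duration attributed to each command: time until the next command.
    dt = np.diff(timestamps, append=timestamps[-1])

    active = np.abs(commands) > active_threshold           # (T, 7) mask
    active_time = (active * dt[:, None]).sum(axis=0)       # seconds per dimension

    pos_time = ((commands > active_threshold) * dt[:, None]).sum(axis=0)
    neg_time = ((commands < -active_threshold) * dt[:, None]).sum(axis=0)
    asymmetry = np.where(active_time > 0,
                         (pos_time - neg_time) / np.maximum(active_time, 1e-9),
                         0.0)
    return active_time, asymmetry

# Synthetic example: a 60 s session of 10 Hz commands for a 7-DOF arm.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)
u = rng.normal(scale=0.2, size=(t.size, 7))
u[:, 3] = np.abs(u[:, 3])   # dimension 3 is only ever driven one way
usage, asym = dimension_usage_and_asymmetry(t, u)
print("active seconds per dimension:", np.round(usage, 1))
print("asymmetry per dimension:     ", np.round(asym, 2))
```

In this toy run, dimension 3 shows near-total asymmetry while the others hover near zero; a pattern like that, or a dimension with little active time relative to the rest, is the kind of signal the paper suggests robot intelligence could detect and turn into targeted feedback for the learner.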