An Attention Transfer Model for Human-Assisted Failure Avoidance in Robot Manipulations
| Main authors | , , , |
|---|---|
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Abstract: Due to real-world dynamics and hardware uncertainty, robots inevitably fail in task execution, which can result in undesired or even dangerous behavior. To avoid failures and improve robot performance, it is critical to identify and correct abnormal robot executions at an early stage. However, due to limited reasoning capability and knowledge storage, it is challenging for robots to self-diagnose and self-correct their own abnormalities in both planning and execution. To improve robots' self-diagnosis capability, this research developed a novel human-to-robot attention transfer (H2R-AT) method that identifies robot manipulation errors by leveraging human instructions. H2R-AT fuses an attention mapping mechanism into a novel stacked neural network model, transferring human verbal attention into robot visual attention. With the transferred attention, a robot understands what and where the human's concerns are, and uses that information to identify and correct abnormal manipulations. Two representative task scenarios, "serve water for a human in a kitchen" and "pick up a defective gear in a factory", were designed in the simulation framework CRAIhri with abnormal robot manipulations, and 252 volunteers were recruited to provide about 12,000 verbal reminders for learning and testing H2R-AT. The method's effectiveness was validated by a high accuracy of 73.68% in transferring attention and a high accuracy of 66.86% in avoiding grasping failures.
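To make the verbal-to-visual attention transfer idea concrete, below is a minimal sketch (not the authors' code) of how a verbal reminder embedding could be used as a query over visual scene features to produce a spatial attention map and a failure classification. It assumes standard PyTorch; the module name `VerbalToVisualAttention`, the feature dimensions, and the number of failure classes are all illustrative assumptions, since the record gives no implementation details.

```python
# Hypothetical sketch of verbal-to-visual attention transfer (assumed design,
# not the H2R-AT implementation): a pooled text embedding attends over
# flattened visual features, and the attended feature is classified.
import torch
import torch.nn as nn


class VerbalToVisualAttention(nn.Module):
    def __init__(self, text_dim=256, vis_dim=512, n_classes=4):
        super().__init__()
        self.query = nn.Linear(text_dim, vis_dim)       # verbal attention as query
        self.classifier = nn.Linear(vis_dim, n_classes)  # manipulation-state logits

    def forward(self, text_emb, vis_feats):
        # text_emb:  (B, text_dim)      pooled embedding of the verbal reminder
        # vis_feats: (B, H*W, vis_dim)  flattened CNN feature map of the scene
        q = self.query(text_emb).unsqueeze(1)                     # (B, 1, vis_dim)
        scores = torch.bmm(q, vis_feats.transpose(1, 2))          # (B, 1, H*W)
        attn = torch.softmax(scores / vis_feats.size(-1) ** 0.5, dim=-1)
        attended = torch.bmm(attn, vis_feats).squeeze(1)          # (B, vis_dim)
        return self.classifier(attended), attn.squeeze(1)         # logits, attention map


# Example usage with random tensors standing in for real text/visual features.
model = VerbalToVisualAttention()
logits, attn_map = model(torch.randn(2, 256), torch.randn(2, 49, 512))
```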
DOI: 10.48550/arxiv.2002.04242