Applying Learning-from-observation to household service robots: three common-sense formulation
Format: Article
Language: English
Abstract: Utilizing a robot in a new application requires the robot to be
programmed each time. To reduce such programming efforts, we have been
developing ``Learning-from-observation (LfO)'', which automatically generates
robot programs by observing human demonstrations. One of the main issues with
introducing this LfO system into the domain of household tasks is the cluttered
environment, which makes it difficult to determine which elements are important
for task execution when observing demonstrations. To overcome this issue, the
system needs common sense shared with the human demonstrator. This paper
addresses three relationships that LfO in the household domain should focus on
when observing demonstrations and proposes representations to describe the
common sense used by the demonstrator for optimal execution of task sequences.
Specifically, the paper proposes to use Labanotation to describe the postures
between the environment and the robot, contact-webs to describe the grasping
methods between the robot and the tool, and physical and semantic constraints
to describe the motions between the tool and the environment. Then, based on
these representations, the paper formulates task models, machine-independent
robot programs that indicate what to do and how to do it. Next, the paper
explains the task encoder that obtains task models and the task decoder that
executes them on the robot hardware. Finally, the paper presents how the system
actually works through several example scenes.
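
To make the three relationships concrete, the following Python sketch shows
one way a task model bundling these representations might be organized. This
is a minimal illustration under assumptions of our own: every class name,
field, and the decoder stub below are hypothetical and do not reproduce the
paper's actual data structures or API.

```python
# Illustrative sketch only: names and fields are assumptions, not the
# paper's actual representations.
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple


class ConstraintKind(Enum):
    PHYSICAL = "physical"   # e.g., keep the tool touching the table surface
    SEMANTIC = "semantic"   # e.g., keep a filled cup upright while moving


@dataclass
class LabanotationScore:
    """Environment-robot relationship: key postures as Labanotation symbols."""
    # One keyframe = a list of (body part, direction, level) entries.
    keyframes: List[List[Tuple[str, str, str]]]


@dataclass
class ContactWeb:
    """Robot-tool relationship: a grasp described by hand-tool contact points."""
    contact_points: List[Tuple[float, float, float]]  # in the tool frame
    grasp_type: str  # e.g., "precision" or "power"


@dataclass
class Constraint:
    """Tool-environment relationship: a condition the motion must satisfy."""
    kind: ConstraintKind
    description: str


@dataclass
class TaskModel:
    """Machine-independent robot program: what to do and how to do it."""
    skill_name: str                # what to do
    posture: LabanotationScore     # how: environment vs. robot
    grasp: ContactWeb              # how: robot vs. tool
    constraints: List[Constraint]  # how: tool vs. environment


def decode(task: TaskModel) -> None:
    """Stand-in for a task decoder mapping a task model onto hardware."""
    print(f"Executing '{task.skill_name}' with a {task.grasp.grasp_type} grasp "
          f"under {len(task.constraints)} constraint(s).")


# Hypothetical usage: a 'wipe the table' task encoded from a demonstration.
wipe = TaskModel(
    skill_name="wipe-table",
    posture=LabanotationScore(keyframes=[[("right arm", "forward", "middle")]]),
    grasp=ContactWeb(contact_points=[(0.0, 0.02, 0.0)], grasp_type="power"),
    constraints=[Constraint(ConstraintKind.PHYSICAL,
                            "sponge stays in contact with the table plane")],
)
decode(wipe)
```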
DOI: 10.48550/arxiv.2304.09966