Learning Online from Corrective Feedback: A Meta-Algorithm for Robotics
Main Authors: | |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | A key challenge in Imitation Learning (IL) is that optimal state-action demonstrations are difficult for the teacher to provide. For example, in robotics, providing kinesthetic demonstrations on a robotic manipulator requires the teacher to control multiple degrees of freedom at once. The difficulty of requiring optimal state-action demonstrations limits the space of problems where the teacher can provide quality feedback. As an alternative to state-action demonstrations, the teacher can provide corrective feedback such as preferences or rewards. Prior work has created algorithms designed to learn from specific types of noisy feedback, but different forms of feedback may be required across teachers and tasks. Instead, we propose that learning from a diversity of scenarios requires learning from a variety of feedback. To do so, we rely on the following insight: the teacher's cost function is latent, and we can model a stream of feedback as a stream of loss functions. We then use any online learning algorithm to minimize the sum of these losses. With this insight, we can learn from a diversity of feedback that is weakly correlated with the teacher's true cost function. We unify prior work into a general corrective-feedback meta-algorithm and show that, regardless of the feedback type, we can obtain the same regret bounds. We demonstrate our approach by learning to perform a household navigation task on a robotic racecar platform. Our results show that our approach learns quickly from a variety of noisy feedback. |
DOI: | 10.48550/arxiv.2104.01021 |
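The abstract's central mechanism (convert each piece of teacher feedback into a loss on the latent cost function, then run any no-regret online learner over the resulting loss stream) can be illustrated with a short sketch. The code below is a minimal illustration only, assuming online gradient descent on a linear policy; the loss constructors, function names, and dimensions are hypothetical choices made for exposition and are not the paper's actual implementation.

```python
# A minimal sketch of the idea from the abstract: each feedback event
# (a demonstration, a preference, a scalar reward) is mapped to a
# per-round loss gradient, and a single no-regret online learner
# (here, online gradient descent) consumes all of them uniformly.
# All names and loss forms below are illustrative assumptions.
import numpy as np

DIM = 4          # dimension of the linear policy parameters (assumed)
STEP_SIZE = 0.1  # fixed learning rate for online gradient descent


def demo_loss_grad(theta, state, action):
    """Gradient of a squared-error loss against a demonstrated action."""
    pred = theta @ state
    return 2.0 * (pred - action) * state


def preference_loss_grad(theta, state_a, state_b):
    """Gradient of a hinge-style loss for 'A is preferred over B'."""
    margin = theta @ state_a - theta @ state_b
    return -(state_a - state_b) if margin < 1.0 else np.zeros_like(theta)


def reward_loss_grad(theta, state, reward):
    """Gradient of a regression loss treating a scalar reward as a target."""
    pred = theta @ state
    return 2.0 * (pred - reward) * state


def online_corrective_learning(feedback_stream):
    """Run online gradient descent over a stream of mixed feedback.

    Each event is (kind, payload). The per-round loss depends on the
    feedback kind, but the no-regret update rule is shared by all kinds.
    """
    theta = np.zeros(DIM)
    for kind, payload in feedback_stream:
        if kind == "demo":
            grad = demo_loss_grad(theta, *payload)
        elif kind == "preference":
            grad = preference_loss_grad(theta, *payload)
        elif kind == "reward":
            grad = reward_loss_grad(theta, *payload)
        else:
            continue  # skip feedback kinds we cannot turn into a loss
        theta -= STEP_SIZE * grad  # the same update for every feedback type
    return theta


rng = np.random.default_rng(0)
stream = [
    ("demo", (rng.normal(size=DIM), 1.0)),
    ("preference", (rng.normal(size=DIM), rng.normal(size=DIM))),
    ("reward", (rng.normal(size=DIM), 0.5)),
]
print(online_corrective_learning(stream))
```

The point of the sketch is the abstract's unification claim: because every feedback type reduces to a loss gradient fed into the same online update, the learner's regret guarantees depend on the online algorithm, not on which kind of feedback the teacher happens to give.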