Learning to Get Up
Format: Article
Language: English
Abstract: Getting up from an arbitrary fallen state is a basic human skill. Existing
methods for learning this skill often generate highly dynamic and erratic
get-up motions, which do not resemble human get-up strategies, or are based on
tracking recorded human get-up motions. In this paper, we present a staged
approach using reinforcement learning, without recourse to motion capture data.
The method first takes advantage of a strong character model, which facilitates
the discovery of solution modes. A second stage then learns to adapt the
control policy to work with progressively weaker versions of the character.
Finally, a third stage learns control policies that can reproduce the weaker
get-up motions at much slower speeds. We show that across multiple runs, the
method can discover a diverse set of get-up strategies and execute them at a
range of speeds. The resulting policies usually employ a final stand-up
strategy that is common to the recovery motions seen from all initial
states. However, we also find policies in which different strategies are used
for prone and supine initial fallen states. The learned get-up control
strategies often have significant static stability, i.e., they can be paused at
a variety of points during the get-up motion. We further test our method on
novel constrained scenarios, such as having a leg and an arm in a cast.
DOI: 10.48550/arxiv.2205.00307
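
The staged approach described in the abstract lends itself to a compact curriculum loop. The sketch below is a hypothetical illustration of that structure only: the `train_policy` routine, the strength and speed scales, and their schedules are invented placeholders, not the authors' implementation.

```python
# A minimal sketch of the three-stage curriculum described in the abstract.
# All names and numeric values here are illustrative assumptions.

def train_policy(env_config, init_params=None):
    """Stand-in for an off-the-shelf RL training loop (e.g., PPO).

    A real implementation would build a physics environment from
    `env_config`, run rollouts, and update the policy parameters.
    """
    params = dict(init_params or {})
    params["last_config"] = dict(env_config)  # record what we trained on
    return params

config = {"strength_scale": 1.6, "speed_scale": 1.0}

# Stage 1: discover get-up solution modes with a strong character,
# which makes feasible motions easier to find.
policy = train_policy(config)

# Stage 2: adapt the policy to progressively weaker versions of the character.
for strength in (1.4, 1.2, 1.0):
    config["strength_scale"] = strength
    policy = train_policy(config, init_params=policy)

# Stage 3: learn to reproduce the weaker get-up motions at slower speeds.
for speed in (0.75, 0.5, 0.25):
    config["speed_scale"] = speed
    policy = train_policy(config, init_params=policy)

print("final curriculum setting:", policy["last_config"])
```

Each stage warm-starts from the previous stage's parameters, so the character is never asked to solve the hardest setting (weak and slow) from scratch.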