Learning Control Policies for Fall prevention and safety in bipedal locomotion
Main Author: 
Format: Article
Language: English
Subjects: 
Online Access: Order full text
Summary: The ability to recover from an unexpected external perturbation is a fundamental motor skill in bipedal locomotion. An effective response includes the ability not just to recover balance and maintain stability but also to fall in a safe manner when balance recovery is physically infeasible. For robots associated with bipedal locomotion, such as humanoid robots and assistive robotic devices that aid humans in walking, designing controllers that provide this stability and safety can prevent damage to the robots and avoid injury-related medical costs. This is a challenging task because it involves generating highly dynamic motion for a high-dimensional, non-linear, and under-actuated system with contacts. Despite prior advances in model-based and optimization methods, challenges such as the need for extensive domain knowledge, relatively long computation times, and limited robustness to changes in dynamics still make this an open problem. In this thesis, we address these issues by developing learning-based algorithms capable of synthesizing push-recovery control policies for two kinds of robots: humanoid robots and assistive robotic devices that aid bipedal locomotion. Our work branches into two closely related directions: 1) learning safe falling and fall-prevention strategies for humanoid robots, and 2) learning fall-prevention strategies for humans using robotic assistive devices. To achieve this, we introduce a set of Deep Reinforcement Learning (DRL) algorithms to learn control policies that improve safety while using these robots.
DOI: 10.48550/arxiv.2201.01361
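
The summary mentions DRL-based push-recovery policies only at a high level. As a rough illustration of what such a training loop can look like, the sketch below runs REINFORCE with a linear Gaussian policy on a toy planar inverted pendulum that receives a random angular-velocity push; the dynamics, reward terms, push model, and hyperparameters are assumptions made for this example and are not taken from the thesis.

```python
# Minimal REINFORCE sketch for push recovery on a toy planar inverted
# pendulum (a crude stand-in for a pushed biped). All dynamics, rewards,
# and hyperparameters are illustrative assumptions, not the thesis methods.
import numpy as np

rng = np.random.default_rng(0)
dt, g, length, mass = 0.02, 9.81, 1.0, 10.0   # integration step and toy model parameters
sigma = 0.5                                    # fixed std of the Gaussian exploration noise

def rollout(K, horizon=200):
    """Simulate one pushed episode under a linear Gaussian policy a ~ N(K @ s, sigma^2)."""
    theta, omega = 0.0, rng.uniform(-2.0, 2.0)  # random angular-velocity "push" at t = 0
    states, actions, rewards = [], [], []
    for _ in range(horizon):
        s = np.array([theta, omega])
        a = float(K @ s + sigma * rng.standard_normal())  # ankle torque
        # Euler-integrated pendulum dynamics with the applied torque and mild damping
        omega += dt * ((g / length) * np.sin(theta) + a / (mass * length**2) - 0.1 * omega)
        theta += dt * omega
        states.append(s)
        actions.append(a)
        rewards.append(-(theta**2) - 0.1 * omega**2 - 1e-3 * a**2)  # stay upright, cheap effort
    return np.array(states), np.array(actions), np.array(rewards)

K = np.zeros(2)                                 # policy parameters (feedback gains)
for it in range(300):                           # REINFORCE: gradient ascent on expected return
    grad = np.zeros_like(K)
    mean_return = 0.0
    for _ in range(16):                         # small batch of episodes per update
        S, A, R = rollout(K)
        returns = np.cumsum(R[::-1])[::-1]      # reward-to-go at each step
        # d log pi / dK for a Gaussian policy with mean K @ s, weighted by reward-to-go
        grad += ((A - S @ K) / sigma**2 * returns) @ S
        mean_return += R.sum()
    grad /= 16
    K += 0.05 * grad / (np.linalg.norm(grad) + 1e-8)   # normalized step for numerical stability
    if it % 50 == 0:
        print(f"iter {it:3d}  mean return {mean_return / 16:8.2f}")
```

The thesis targets far higher-dimensional humanoid and exoskeleton models with contacts, so this linear policy and toy plant only illustrate the structure of a policy-gradient loop, not the scale or the specific algorithms used.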