Non-Prehensile Aerial Manipulation using Model-Based Deep Reinforcement Learning
Format: Article
Language: English
Abstract: With the continual adoption of Uncrewed Aerial Vehicles (UAVs) across a
wide variety of application spaces, robust aerial manipulation remains a key
research challenge. Aerial manipulation tasks require interacting with objects
in the environment, often without knowing their dynamical properties, such as
mass and friction, a priori. Additionally, interacting with these objects can
have a significant impact on the control and stability of the vehicle. We
investigated an approach for robust control and non-prehensile aerial
manipulation in unknown environments. In particular, we used model-based Deep
Reinforcement Learning (DRL) to learn a world model of the environment while
simultaneously learning a policy for interaction with the environment. We
evaluated our approach on a series of push tasks, moving an object between goal
locations, and demonstrated repeatable behavior across a range of friction
values.
DOI: 10.48550/arxiv.2407.00889
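The core idea in the abstract, learning a model of unknown object dynamics from interaction and then acting through that learned model, can be illustrated with a deliberately toy sketch. Nothing here comes from the paper: the 1-D push dynamics, the linear least-squares "world model," the friction parameters, and the random-shooting planner are all illustrative assumptions standing in for the paper's learned neural world model and DRL policy.

```python
import random

# Toy stand-in for non-prehensile pushing: a 1-D object whose true
# dynamics x' = x + K*a - F (gain K, friction offset F) are unknown
# to the agent. K and F are hypothetical values for this sketch.
TRUE_K, TRUE_F = 0.8, 0.3

def step(x, a):
    """Real environment transition (hidden from the agent)."""
    return x + TRUE_K * a - TRUE_F

def fit_model(transitions):
    """Least-squares fit of dx = k*a - f from (x, a, x') tuples:
    a minimal 'world model' learned from interaction data."""
    n = len(transitions)
    sa = sum(a for _, a, _ in transitions)
    sdx = sum(x2 - x for x, _, x2 in transitions)
    saa = sum(a * a for _, a, _ in transitions)
    sadx = sum(a * (x2 - x) for x, a, x2 in transitions)
    k = (n * sadx - sa * sdx) / (n * saa - sa * sa)
    f = (k * sa - sdx) / n
    return k, f

def plan(x, goal, k, f, candidates=200):
    """Random-shooting planner inside the learned model: choose the
    action whose predicted next state is closest to the goal."""
    best_a, best_err = 0.0, float("inf")
    for _ in range(candidates):
        a = random.uniform(-2.0, 2.0)
        err = abs((x + k * a - f) - goal)
        if err < best_err:
            best_a, best_err = a, err
    return best_a

random.seed(0)

# Phase 1: collect random interaction data and fit the world model.
data, x = [], 0.0
for _ in range(50):
    a = random.uniform(-2.0, 2.0)
    x2 = step(x, a)
    data.append((x, a, x2))
    x = x2
k_hat, f_hat = fit_model(data)

# Phase 2: push the object toward a goal using the learned model.
x, goal = 0.0, 1.0
for _ in range(5):
    x = step(x, plan(x, goal, k_hat, f_hat))
print(k_hat, f_hat, abs(x - goal))
```

Because the toy dynamics are noiseless and linear, the fit recovers the true parameters and the planner converges near the goal; the paper's setting replaces all three pieces (dynamics, model, controller) with a real UAV, a learned neural world model, and a DRL policy robust across friction values.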