Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations
Format: Article
Language: English
Abstract: Machines are a long way from robustly solving open-world perception-control tasks, such as first-person view (FPV) aerial navigation. While recent advances in end-to-end machine learning, especially imitation and reinforcement learning, appear promising, they are constrained by the need for large amounts of difficult-to-collect labeled real-world data. Simulated data, on the other hand, is easy to generate, but generally does not render safe behaviors in diverse real-life scenarios. In this work we propose a novel method for learning robust visuomotor policies for real-world deployment that can be trained purely with simulated data. We develop rich state representations that combine supervised and unsupervised environment data. Our approach takes a cross-modal perspective, where separate modalities correspond to the raw camera data and the system states relevant to the task, such as the relative pose of gates to the drone in the case of drone racing. We feed both data modalities into a novel factored architecture, which learns a joint low-dimensional embedding via variational autoencoders. This compact representation is then fed into a control policy, which is trained using imitation learning with expert trajectories in a simulator. We analyze the rich latent spaces learned with our proposed representations, and show that the use of our cross-modal architecture significantly improves control-policy performance compared to end-to-end learning or purely unsupervised feature extractors. We also present real-world results for drone navigation through gates in different track configurations and environmental conditions. Our proposed method, which runs fully onboard, can successfully generalize the learned representations and policies across simulation and reality, significantly outperforming baseline approaches.

Supplementary video: https://youtu.be/VKc3A5HlUU8
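To make the factored architecture concrete, the following is a minimal sketch of a cross-modal variational autoencoder in PyTorch: two encoders (one for camera images, one for task-relevant state such as relative gate pose) map into a single shared latent space, and decoders reconstruct each modality from that joint embedding. This is an illustrative sketch, not the authors' released code; the 64x64 input resolution, 10-D latent, 4-D state, and all class and layer names are placeholder assumptions.

```python
# Illustrative cross-modal VAE sketch (PyTorch). All names, sizes, and
# layer choices are assumptions, not taken from the paper's implementation.
import torch
import torch.nn as nn

class CrossModalVAE(nn.Module):
    """Two modality encoders share one latent space; each modality can be
    reconstructed from the joint embedding, tying the modalities together."""
    def __init__(self, latent_dim=10, state_dim=4):
        super().__init__()
        # Image encoder: 64x64 RGB -> Gaussian parameters over the latent.
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 2 * latent_dim),  # mean and log-variance
        )
        # State encoder: e.g. relative gate pose -> latent parameters.
        self.state_enc = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * latent_dim),
        )
        # Decoders map the shared latent back to each modality.
        self.img_dec = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.state_dec = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def reparameterize(self, stats):
        # Standard VAE reparameterization trick.
        mu, logvar = stats.chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

    def forward(self, image, state):
        z_img, mu_i, lv_i = self.reparameterize(self.img_enc(image))
        z_state, _, _ = self.reparameterize(self.state_enc(state))
        # Cross-modal term: the image latent must also predict the state,
        # so task-relevant structure is forced into the shared embedding.
        return {
            "img_recon": self.img_dec(z_img),
            "state_from_img": self.state_dec(z_img),
            "state_recon": self.state_dec(z_state),
            # KL term shown for the image branch; the state branch is analogous.
            "kl": -0.5 * (1 + lv_i - mu_i.pow(2) - lv_i.exp()).sum(1).mean(),
        }
```

A training loop would sum the reconstruction losses for each output plus the KL terms; at deployment only the image encoder is needed to produce the compact representation consumed by the control policy.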
DOI: 10.48550/arxiv.1909.06993
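The second stage described in the abstract, imitation learning of the control policy on top of the frozen embedding, can likewise be sketched as simple behavior cloning. This is a hedged illustration under assumed details: the small MLP sizes and the 4-D normalized velocity command are placeholders, not the paper's actual policy specification.

```python
# Behavior-cloning sketch for the latent-space control policy (PyTorch).
# Network sizes and the 4-D action are assumptions for illustration only.
import torch
import torch.nn as nn

class LatentPolicy(nn.Module):
    """Maps the cross-modal latent embedding to a normalized control command."""
    def __init__(self, latent_dim=10, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # commands in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

def bc_step(policy, optimizer, z_batch, expert_actions):
    """One behavior-cloning update: regress expert actions from latents
    produced by the (frozen) cross-modal encoder on simulated trajectories."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(policy(z_batch), expert_actions)
    loss.backward()
    optimizer.step()
    return loss.item()
```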