DeepMPCVS: Deep Model Predictive Control for Visual Servoing
Saved in:

Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: 4th Annual Conference on Robot Learning, CoRL 2020, Cambridge, MA,
USA, November 16 - November 18, 2020. The simplicity of the visual servoing
approach makes it an attractive option for tasks dealing with vision-based
control of robots in many real-world applications. However, attaining precise
alignment in unseen environments poses a challenge to existing visual servoing
approaches. While classical approaches assume a perfect world, recent
data-driven approaches face issues when generalizing to novel environments. In
this paper, we aim to combine the best of both worlds. We present a deep model
predictive visual servoing framework that can achieve precise alignment with
optimal trajectories and can generalize to novel environments. Our framework
consists of a deep network for optical flow predictions, which are used along
with a predictive model to forecast future optical flow. For generating an
optimal set of velocities, we present a control network that can be trained on
the fly without any supervision. Through extensive simulations on
photo-realistic indoor settings of the popular Habitat framework, we show
significant performance gains from the proposed formulation vis-a-vis recent
state-of-the-art methods. Specifically, we show faster convergence and
improved performance in trajectory length over recent approaches.
DOI: 10.48550/arxiv.2105.00788
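
The abstract outlines a three-part pipeline: a deep network predicts the optical
flow between the current and goal images, a predictive model forecasts future
flow under candidate velocities, and a control network is optimized online,
without supervision, to output velocities that drive the forecast residual flow
toward zero. Below is a minimal, hypothetical PyTorch sketch of that loop. All
names (`ControlNet`, `flow_forecast`, `dummy_flow_net`) and the simplistic
linear flow-decay forecast are illustrative assumptions, not the authors'
implementation.

```python
# Hypothetical sketch of the control loop described in the abstract.
# ControlNet, flow_forecast, and dummy_flow_net are illustrative stand-ins,
# not the authors' actual models.
import torch
import torch.nn as nn

class ControlNet(nn.Module):
    """Small network, trained on the fly without supervision, that maps a
    coarse flow statistic to a 6-DoF camera velocity."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),
        )

    def forward(self, flow_summary: torch.Tensor) -> torch.Tensor:
        return self.net(flow_summary)

def dummy_flow_net(img_now: torch.Tensor, img_goal: torch.Tensor) -> torch.Tensor:
    # Stand-in for a pretrained optical-flow network: returns a (B, 2, H, W)
    # pseudo-flow from the image difference. A real system would plug in a
    # learned flow model here.
    return (img_goal - img_now)[:, :2]

def flow_forecast(flow_now: torch.Tensor, velocity: torch.Tensor,
                  horizon: int, dt: float = 0.1) -> torch.Tensor:
    # Toy predictive model: assume the residual flow shrinks as the commanded
    # velocity closes the gap over the horizon (a crude, differentiable
    # placeholder for the paper's flow-forecasting model).
    speed = velocity.norm() + 1e-6
    return flow_now * torch.clamp(1.0 - horizon * dt * speed, min=0.0)

def servo_step(flow_net, ctrl, optimizer, img_now, img_goal, horizon: int = 5):
    flow_goal = flow_net(img_now, img_goal)  # flow toward the goal view
    summary = flow_goal.mean(dim=(-2, -1))   # coarse (B, 2) flow statistic
    v = ctrl(summary)                        # candidate 6-DoF velocity
    residual = flow_forecast(flow_goal, v, horizon)
    loss = residual.abs().mean()             # unsupervised MPC-style objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                         # online update of the controller
    return v.detach(), loss.item()

# Example usage on random images.
ctrl = ControlNet()
opt = torch.optim.Adam(ctrl.parameters(), lr=1e-3)
img_now, img_goal = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
v, loss = servo_step(dummy_flow_net, ctrl, opt, img_now, img_goal)
print(v.shape, loss)  # torch.Size([1, 6]) and a scalar loss
```

Repeating `servo_step` at each control tick gives the "trained on the fly"
behavior the abstract describes: the loss needs no labels, only the forecast
flow residual, so the controller adapts to each novel scene as it servos.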