DeepMPCVS: Deep Model Predictive Control for Visual Servoing
4th Annual Conference on Robot Learning, CoRL 2020, Cambridge, MA, USA, November 16 - November 18, 2020
Saved in:
Main authors: | Katara, Pushkal; Harish, Y V S; Pandya, Harit; Gupta, Abhinav; Sanchawala, Aadil Mehdi; Kumar, Gourav; Bhowmick, Brojeshwar; K, Madhava Krishna |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Robotics |
Online access: | Order full text |
creator | Katara, Pushkal; Harish, Y V S; Pandya, Harit; Gupta, Abhinav; Sanchawala, Aadil Mehdi; Kumar, Gourav; Bhowmick, Brojeshwar; K, Madhava Krishna |
description | 4th Annual Conference on Robot Learning, CoRL 2020, Cambridge, MA, USA, November 16 - November 18, 2020. The simplicity of the visual servoing approach makes it an attractive option for tasks dealing with vision-based control of robots in many real-world applications. However, attaining precise alignment in unseen environments poses a challenge to existing visual servoing approaches. While classical approaches assume a perfect world, recent data-driven approaches face issues when generalizing to novel environments. In this paper, we aim to combine the best of both worlds. We present a deep model predictive visual servoing framework that can achieve precise alignment with optimal trajectories and can generalize to novel environments. Our framework consists of a deep network for optical flow predictions, which are used along with a predictive model to forecast future optical flow. For generating an optimal set of velocities, we present a control network that can be trained on the fly without any supervision. Through extensive simulations on photo-realistic indoor settings of the popular Habitat framework, we show a significant performance gain from the proposed formulation vis-à-vis recent state-of-the-art methods. Specifically, we show faster convergence and an improved performance in trajectory length over recent approaches. |
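
The structure the abstract describes (a deep network estimating optical flow, a predictive model forecasting future flow under candidate velocities, and a control network optimized on the fly without supervision) can be illustrated with a short sketch. This is a minimal, hypothetical PyTorch rendering, not the authors' code: `ControlNet`, `forecast_flow`, `servo_step`, the `flow_net` estimator, and the `interaction_fn` flow model are all assumed names and interfaces.

```python
# Hypothetical sketch of one deep-MPC visual servoing step (names assumed).
import torch
import torch.nn as nn


class ControlNet(nn.Module):
    """Holds a horizon of 6-DoF camera velocities optimized on the fly."""

    def __init__(self, horizon: int = 5):
        super().__init__()
        # One [vx, vy, vz, wx, wy, wz] velocity per MPC step.
        self.vel = nn.Parameter(torch.zeros(horizon, 6))

    def forward(self) -> torch.Tensor:
        return self.vel


def forecast_flow(velocities, interaction_fn, flow_shape):
    """Roll the predictive model forward: accumulate the incremental flow
    each velocity step induces. interaction_fn is an assumed, differentiable
    flow-from-velocity model."""
    flow = torch.zeros(flow_shape)
    for v in velocities:
        flow = flow + interaction_fn(flow, v)
    return flow


def servo_step(img_now, img_goal, flow_net, interaction_fn,
               iters: int = 50, lr: float = 1e-2):
    """One receding-horizon iteration: fit the control network so the
    forecast flow matches the deep network's current-to-goal flow, then
    return only the first velocity for execution."""
    with torch.no_grad():
        target_flow = flow_net(img_now, img_goal)  # deep optical-flow estimate

    ctrl = ControlNet()
    opt = torch.optim.Adam(ctrl.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        pred_flow = forecast_flow(ctrl(), interaction_fn, target_flow.shape)
        # Unsupervised objective: forecast flow should reach the goal flow.
        loss = torch.mean((pred_flow - target_flow) ** 2)
        loss.backward()
        opt.step()
    return ctrl.vel.detach()[0]
```

The abstract does not specify which flow network or predictive model is used, so the sketch only mirrors the described pattern: the velocities themselves are the learnable parameters, refit at every control step against a flow-matching loss, with the first velocity of the horizon executed before replanning.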
doi_str_mv | 10.48550/arxiv.2105.00788 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2105.00788 |
language | eng |
recordid | cdi_arxiv_primary_2105_00788 |
source | arXiv.org |
subjects | Computer Science - Robotics |
title | DeepMPCVS: Deep Model Predictive Control for Visual Servoing |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T10%3A07%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=DeepMPCVS:%20Deep%20Model%20Predictive%20Control%20for%20Visual%20Servoing&rft.au=Katara,%20Pushkal&rft.date=2021-05-03&rft_id=info:doi/10.48550/arxiv.2105.00788&rft_dat=%3Carxiv_GOX%3E2105_00788%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |