Mobile Manipulation Leveraging Multiple Views
creator | Watkins, David; Allen, Peter K; Maia, Henrique; Seshadri, Madhavan; Sanabria, Jonathan; Waytowich, Nicholas; Varley, Jacob |
---|---|
description | While both navigation and manipulation are challenging topics in isolation, many tasks require the ability to both navigate and manipulate in concert. To this end, we propose a mobile manipulation system that leverages novel navigation and shape-completion methods to manipulate an object with a mobile robot. Our system uses the uncertainty in the initial estimate of a manipulation target to calculate a predicted next-best view. Without the need for localization, the robot then uses the predicted panoramic view at the next-best-view location to navigate to the desired location, capture a second view of the object, create a new model that predicts the shape of the object more accurately than a single image alone, and use this model for grasp planning. We show that the system is highly effective for mobile manipulation tasks through simulation experiments using real-world data, as well as ablations on each component of our system. |
format | Article |
fullrecord | <record><control><sourceid>arxiv_GOX</sourceid><recordid>TN_cdi_arxiv_primary_2110_00717</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2110_00717</sourcerecordid><originalsourceid>FETCH-LOGICAL-a677-198649a05eeab1279d0f5fd245db3e5ab3ea8458d00d7c4be8cda0e7f85aafc3</originalsourceid><addsrcrecordid>eNotjssKwjAURLNxIeoHuLI_UL1pE5MupfiCFheK23Lb3EigtqW-_9762MzAcBgOY2MOU6GlhBm2T3efBrwbABRXfeande5K8lKsXHMr8erqykvoTi2eXHXy0lt5dU0HHB09LkPWs1heaPTvAduvlod44ye79TZeJD7OlfJ5pOciQpBEmPNARQastCYQ0uQhSewCtZDaABhViJx0YRBIWS0RbREO2OT3-tXNmtadsX1lH-3sqx2-AVXbPa8</addsrcrecordid><sourcetype>Open Access Repository</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype></control><display><type>article</type><title>Mobile Manipulation Leveraging Multiple Views</title><source>arXiv.org</source><creator>Watkins, David ; Allen, Peter K ; Maia, Henrique ; Seshadri, Madhavan ; Sanabria, Jonathan ; Waytowich, Nicholas ; Varley, Jacob</creator><creatorcontrib>Watkins, David ; Allen, Peter K ; Maia, Henrique ; Seshadri, Madhavan ; Sanabria, Jonathan ; Waytowich, Nicholas ; Varley, Jacob</creatorcontrib><description>While both navigation and manipulation are challenging topics in isolation,
many tasks require the ability to both navigate and manipulate in concert. To
this end, we propose a mobile manipulation system that leverages novel
navigation and shape completion methods to manipulate an object with a mobile
robot. Our system utilizes uncertainty in the initial estimation of a
manipulation target to calculate a predicted next-best-view. Without the need
of localization, the robot then uses the predicted panoramic view at the
next-best-view location to navigate to the desired location, capture a second
view of the object, create a new model that predicts the shape of object more
accurately than a single image alone, and uses this model for grasp planning.
We show that the system is highly effective for mobile manipulation tasks
through simulation experiments using real world data, as well as ablations on
each component of our system.</description><identifier>DOI: 10.48550/arxiv.2110.00717</identifier><language>eng</language><subject>Computer Science - Robotics</subject><creationdate>2021-10</creationdate><rights>http://creativecommons.org/licenses/by-nc-nd/4.0</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>228,230,781,886</link.rule.ids><linktorsrc>$$Uhttps://arxiv.org/abs/2110.00717$$EView_record_in_Cornell_University$$FView_record_in_$$GCornell_University$$Hfree_for_read</linktorsrc><backlink>$$Uhttps://doi.org/10.48550/arXiv.2110.00717$$DView paper in arXiv$$Hfree_for_read</backlink></links><search><creatorcontrib>Watkins, David</creatorcontrib><creatorcontrib>Allen, Peter K</creatorcontrib><creatorcontrib>Maia, Henrique</creatorcontrib><creatorcontrib>Seshadri, Madhavan</creatorcontrib><creatorcontrib>Sanabria, Jonathan</creatorcontrib><creatorcontrib>Waytowich, Nicholas</creatorcontrib><creatorcontrib>Varley, Jacob</creatorcontrib><title>Mobile Manipulation Leveraging Multiple Views</title><description>While both navigation and manipulation are challenging topics in isolation,
many tasks require the ability to both navigate and manipulate in concert. To
this end, we propose a mobile manipulation system that leverages novel
navigation and shape completion methods to manipulate an object with a mobile
robot. Our system utilizes uncertainty in the initial estimation of a
manipulation target to calculate a predicted next-best-view. Without the need
of localization, the robot then uses the predicted panoramic view at the
next-best-view location to navigate to the desired location, capture a second
view of the object, create a new model that predicts the shape of object more
accurately than a single image alone, and uses this model for grasp planning.
We show that the system is highly effective for mobile manipulation tasks
through simulation experiments using real world data, as well as ablations on
each component of our system.</description><subject>Computer Science - Robotics</subject><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2021</creationdate><recordtype>article</recordtype><sourceid>GOX</sourceid><recordid>eNotjssKwjAURLNxIeoHuLI_UL1pE5MupfiCFheK23Lb3EigtqW-_9762MzAcBgOY2MOU6GlhBm2T3efBrwbABRXfeande5K8lKsXHMr8erqykvoTi2eXHXy0lt5dU0HHB09LkPWs1heaPTvAduvlod44ye79TZeJD7OlfJ5pOciQpBEmPNARQastCYQ0uQhSewCtZDaABhViJx0YRBIWS0RbREO2OT3-tXNmtadsX1lH-3sqx2-AVXbPa8</recordid><startdate>20211001</startdate><enddate>20211001</enddate><creator>Watkins, David</creator><creator>Allen, Peter K</creator><creator>Maia, Henrique</creator><creator>Seshadri, Madhavan</creator><creator>Sanabria, Jonathan</creator><creator>Waytowich, Nicholas</creator><creator>Varley, Jacob</creator><scope>AKY</scope><scope>GOX</scope></search><sort><creationdate>20211001</creationdate><title>Mobile Manipulation Leveraging Multiple Views</title><author>Watkins, David ; Allen, Peter K ; Maia, Henrique ; Seshadri, Madhavan ; Sanabria, Jonathan ; Waytowich, Nicholas ; Varley, Jacob</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-a677-198649a05eeab1279d0f5fd245db3e5ab3ea8458d00d7c4be8cda0e7f85aafc3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2021</creationdate><topic>Computer Science - Robotics</topic><toplevel>online_resources</toplevel><creatorcontrib>Watkins, David</creatorcontrib><creatorcontrib>Allen, Peter K</creatorcontrib><creatorcontrib>Maia, Henrique</creatorcontrib><creatorcontrib>Seshadri, Madhavan</creatorcontrib><creatorcontrib>Sanabria, Jonathan</creatorcontrib><creatorcontrib>Waytowich, Nicholas</creatorcontrib><creatorcontrib>Varley, Jacob</creatorcontrib><collection>arXiv Computer Science</collection><collection>arXiv.org</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Watkins, David</au><au>Allen, Peter K</au><au>Maia, Henrique</au><au>Seshadri, Madhavan</au><au>Sanabria, Jonathan</au><au>Waytowich, Nicholas</au><au>Varley, Jacob</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Mobile Manipulation Leveraging Multiple Views</atitle><date>2021-10-01</date><risdate>2021</risdate><abstract>While both navigation and manipulation are challenging topics in isolation,
many tasks require the ability to both navigate and manipulate in concert. To
this end, we propose a mobile manipulation system that leverages novel
navigation and shape completion methods to manipulate an object with a mobile
robot. Our system utilizes uncertainty in the initial estimation of a
manipulation target to calculate a predicted next-best-view. Without the need
of localization, the robot then uses the predicted panoramic view at the
next-best-view location to navigate to the desired location, capture a second
view of the object, create a new model that predicts the shape of object more
accurately than a single image alone, and uses this model for grasp planning.
We show that the system is highly effective for mobile manipulation tasks
through simulation experiments using real world data, as well as ablations on
each component of our system.</abstract><doi>10.48550/arxiv.2110.00717</doi><oa>free_for_read</oa></addata></record> |
identifier | DOI: 10.48550/arxiv.2110.00717 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Robotics |
title | Mobile Manipulation Leveraging Multiple Views |
url | https://arxiv.org/abs/2110.00717 |
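
The abstract above describes selecting a next-best view from the uncertainty of an initial shape-completion estimate. The paper's implementation is not reproduced in this record; as a rough illustration of that idea only, the sketch below scores candidate viewpoints by how much predicted-occupancy entropy each would observe. Everything in it is an assumption for illustration: the entropy scoring, the field-of-view visibility proxy (a real system would ray-cast through the grid), and all function names (`voxel_entropy`, `visible_voxels`, `next_best_view`) are hypothetical, not the authors' method.

```python
# Illustrative sketch: entropy-based next-best-view selection over a voxel
# occupancy grid. Not the paper's code; all names here are hypothetical.
import numpy as np

def voxel_entropy(p_occ: np.ndarray) -> np.ndarray:
    """Per-voxel binary entropy of predicted occupancy probabilities."""
    p = np.clip(p_occ, 1e-6, 1.0 - 1e-6)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def visible_voxels(voxel_centers: np.ndarray, view_xy: np.ndarray,
                   fov_deg: float = 60.0) -> np.ndarray:
    """Crude visibility proxy: voxels inside the camera's horizontal FOV.

    Assumes the camera at ground position view_xy looks at the object at
    the origin. A real system would ray-cast occlusions instead.
    """
    look_dir = -view_xy / (np.linalg.norm(view_xy) + 1e-9)
    offsets = voxel_centers[:, :2] - view_xy
    dirs = offsets / (np.linalg.norm(offsets, axis=1, keepdims=True) + 1e-9)
    return dirs @ look_dir >= np.cos(np.deg2rad(fov_deg / 2.0))

def next_best_view(p_occ: np.ndarray, voxel_centers: np.ndarray,
                   candidate_views: np.ndarray) -> int:
    """Pick the candidate expected to observe the most uncertain volume."""
    ent = voxel_entropy(p_occ)
    scores = [ent[visible_voxels(voxel_centers, v)].sum()
              for v in candidate_views]
    return int(np.argmax(scores))

# Toy usage: 1000 voxels around the origin, 8 candidate views on a circle.
rng = np.random.default_rng(0)
centers = rng.uniform(-0.5, 0.5, size=(1000, 3))
p_occ = rng.uniform(0.0, 1.0, size=1000)   # stand-in for a shape-completion output
angles = np.linspace(0.0, 2.0 * np.pi, num=8, endpoint=False)
views = 2.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
print("Best candidate view index:", next_best_view(p_occ, centers, views))
```

A full pipeline along the abstract's lines would then re-run shape completion on the fused first and second views and hand the completed model to a grasp planner, with navigation to the chosen viewpoint driven by matching the predicted panorama rather than by metric localization.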