Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation

A kitchen assistant needs to operate human-scale objects, such as cabinets and ovens, in unmapped environments with dynamic obstacles. Autonomous interactions in such environments require integrating dexterous manipulation and fluid mobility. While mobile manipulators in different form factors provide an extended workspace, their real-world adoption has been limited. Executing a high-level task for general objects requires a perceptual understanding of the object as well as adaptive whole-body control among dynamic obstacles. In this paper, we propose a two-stage architecture for autonomous interaction with large articulated objects in unknown environments. The first stage, object-centric planner, only focuses on the object to provide an action-conditional sequence of states for manipulation using RGB-D data. The second stage, agent-centric planner, formulates the whole-body motion control as an optimal control problem that ensures safe tracking of the generated plan, even in scenes with moving obstacles. We show that the proposed pipeline can handle complex static and dynamic kitchen settings for both wheel-based and legged mobile manipulators. Compared to other agent-centric planners, our proposed planner achieves a higher success rate and a lower execution time. We also perform hardware tests on a legged mobile manipulator to interact with various articulated objects in a kitchen. For additional material, please check: www.pair.toronto.edu/articulated-mm/.

Detailed Description

Bibliographic Details
Published in: arXiv.org, 2022-03
Main authors: Mittal, Mayank; Hoeller, David; Farshidian, Farbod; Hutter, Marco; Garg, Animesh
Format: Article
Language: English
Subjects:
Online access: Full text
description A kitchen assistant needs to operate human-scale objects, such as cabinets and ovens, in unmapped environments with dynamic obstacles. Autonomous interactions in such environments require integrating dexterous manipulation and fluid mobility. While mobile manipulators in different form factors provide an extended workspace, their real-world adoption has been limited. Executing a high-level task for general objects requires a perceptual understanding of the object as well as adaptive whole-body control among dynamic obstacles. In this paper, we propose a two-stage architecture for autonomous interaction with large articulated objects in unknown environments. The first stage, object-centric planner, only focuses on the object to provide an action-conditional sequence of states for manipulation using RGB-D data. The second stage, agent-centric planner, formulates the whole-body motion control as an optimal control problem that ensures safe tracking of the generated plan, even in scenes with moving obstacles. We show that the proposed pipeline can handle complex static and dynamic kitchen settings for both wheel-based and legged mobile manipulators. Compared to other agent-centric planners, our proposed planner achieves a higher success rate and a lower execution time. We also perform hardware tests on a legged mobile manipulator to interact with various articulated objects in a kitchen. For additional material, please check: www.pair.toronto.edu/articulated-mm/.
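The two-stage architecture in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: all names (`RevoluteArticulation`, `object_centric_plan`, `agent_centric_track`) are hypothetical, the articulation model is reduced to a planar hinge, and the paper's optimal control formulation is replaced here by a simple proportional tracking law.

```python
import math
from dataclasses import dataclass

@dataclass
class RevoluteArticulation:
    """Toy hinge model of e.g. a cabinet door (the paper estimates this from RGB-D)."""
    hinge_xy: tuple   # hinge position in the plane
    radius: float     # distance from hinge to handle
    angle: float      # current opening angle (rad)

def object_centric_plan(art, goal_angle, steps=10):
    """Stage 1: object-centric planner. Ignores the robot entirely and emits
    an action-conditional sequence of handle waypoints along the door's arc."""
    waypoints = []
    for i in range(1, steps + 1):
        a = art.angle + (goal_angle - art.angle) * i / steps
        hx, hy = art.hinge_xy
        waypoints.append((hx + art.radius * math.cos(a),
                          hy + art.radius * math.sin(a)))
    return waypoints

def agent_centric_track(ee_xy, waypoints, gain=0.5, iters=200):
    """Stage 2 stand-in: track the plan with the end-effector. The paper
    instead solves a whole-body optimal control problem with obstacle terms."""
    x, y = ee_xy
    for wx, wy in waypoints:
        for _ in range(iters):
            x += gain * (wx - x)
            y += gain * (wy - y)
    return x, y

# Open a 0.4 m door from 0 to 90 degrees.
art = RevoluteArticulation(hinge_xy=(0.0, 0.0), radius=0.4, angle=0.0)
plan = object_centric_plan(art, goal_angle=math.pi / 2)
final = agent_centric_track((0.4, 0.0), plan)
```

The separation mirrors the paper's key design choice: stage 1 depends only on the object, so the same manipulation plan can be tracked by different embodiments (wheeled or legged) in stage 2.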
doi_str_mv 10.48550/arxiv.2103.10534
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-03
issn 2331-8422
language eng
source arXiv.org; Free E-Journals
subjects Adaptive control
Barriers
Cabinets
Computer Science - Artificial Intelligence
Computer Science - Robotics
Control methods
Kitchens
Ovens
Unknown environments