Online Estimation of Articulated Objects with Factor Graphs using Vision and Proprioceptive Sensing

From dishwashers to cabinets, humans interact with articulated objects every day, and for a robot to assist in common manipulation tasks, it must learn a representation of articulation. Recent deep learning methods can provide powerful vision-based priors on the affordances of articulated objects from previous, possibly simulated, experiences. In contrast, many works estimate articulation by observing the object in motion, which requires the robot to already be interacting with the object. In this work, we combine the best of both worlds by introducing an online estimation method that merges vision-based affordance predictions from a neural network with interactive kinematic sensing in an analytical model. Our approach uses vision to predict an articulation model before the object is touched, and updates that model quickly from kinematic sensing during the interaction. We implement a full system using shared autonomy for robotic opening of articulated objects, in particular objects whose articulation is not apparent from vision alone. We deployed our system on a real robot and performed several autonomous closed-loop experiments in which the robot had to open a door with an unknown joint type while estimating the articulation online. Our system achieved an 80% success rate for autonomous opening of unknown articulated objects.
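
The abstract describes fusing a vision-based prior on the articulation model with kinematic measurements gathered while the robot moves the object. The snippet below is a minimal illustration of that fusion idea under simplifying assumptions, not the paper's factor-graph implementation: a planar revolute joint is parameterised by its hinge centre and radius, the "vision prior" is a Gaussian on the hinge centre, and each proprioceptive measurement is a noisy handle position recorded during the interaction. The function name, noise values, and simulated door geometry are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation): MAP estimation of a
# planar revolute joint from a vision-based prior on the hinge centre plus noisy handle
# positions observed during the interaction, solved by Gauss-Newton.

import numpy as np

def estimate_revolute_joint(handle_xy, c_prior, sigma_prior=0.05, sigma_meas=0.01,
                            iters=20):
    """MAP estimate of hinge centre c and radius r from handle positions and a prior."""
    handle_xy = np.asarray(handle_xy, dtype=float)       # (N, 2) observed handle positions
    c = np.array(c_prior, dtype=float)                   # initialise at the vision prior
    r = np.mean(np.linalg.norm(handle_xy - c, axis=1))   # initial radius guess

    for _ in range(iters):
        d = handle_xy - c                                 # (N, 2) offsets from the centre
        dist = np.linalg.norm(d, axis=1)                  # (N,) distances to the centre

        # Residuals: N circle-constraint measurements plus a 2D Gaussian prior on c.
        r_meas = (dist - r) / sigma_meas
        r_prior = (c - np.asarray(c_prior)) / sigma_prior
        residual = np.concatenate([r_meas, r_prior])

        # Jacobian of the residuals with respect to x = (cx, cy, r).
        J_meas = np.column_stack([-d / dist[:, None], -np.ones(len(dist))]) / sigma_meas
        J_prior = np.hstack([np.eye(2), np.zeros((2, 1))]) / sigma_prior
        J = np.vstack([J_meas, J_prior])

        # Gauss-Newton step from the normal equations.
        step = np.linalg.solve(J.T @ J, -J.T @ residual)
        c += step[:2]
        r += step[2]
        if np.linalg.norm(step) < 1e-9:
            break
    return c, r

if __name__ == "__main__":
    # Simulated interaction: a door hinged at (0.8, 0.0) with a 0.7 m handle radius.
    true_c, true_r = np.array([0.8, 0.0]), 0.7
    angles = np.linspace(1.6, 2.2, 15)                    # small opening motion
    rng = np.random.default_rng(0)
    handles = true_c + true_r * np.column_stack([np.cos(angles), np.sin(angles)])
    handles += rng.normal(scale=0.005, size=handles.shape)  # proprioceptive noise

    c_hat, r_hat = estimate_revolute_joint(handles, c_prior=[0.7, 0.05])
    print("hinge centre:", c_hat, "radius:", r_hat)
```

Stacking the prior and measurement residuals and minimising them jointly is the same least-squares problem that a factor graph with one prior factor and N measurement factors would encode; re-running the solve as each new handle pose arrives would give the kind of online update the abstract describes.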

Bibliographic Details

Published in: arXiv.org, 2023-09
Authors: Buchanan, Russell; Röfer, Adrian; Moura, João; Valada, Abhinav; Vijayakumar, Sethu
Format: Article
Language: English
Subjects: Autonomy; Closed loops; Deep learning; Estimation; Kinematics; Mathematical models; Neural networks; Robot dynamics; Robots
Online Access: Full text
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org