Learning Articulated Constraints From a One-Shot Demonstration for Robot Manipulation Planning


Bibliographic Details

Published in: IEEE Access, 2019, Vol. 7, pp. 172584-172596
Authors: Liu, Yizhou; Zha, Fusheng; Sun, Lining; Li, Jingxuan; Li, Mantian; Wang, Xin
Format: Article
Language: English
Publisher: IEEE (Piscataway)
ISSN/EISSN: 2169-3536
CODEN: IAECCG
DOI: 10.1109/ACCESS.2019.2953894
Subjects: Algorithms; Constraint modelling; Hidden Markov models; Learning; Learning from demonstration; Manifolds; Motion segmentation; Movement segmentation; Planning; Rigid structures; Robot manipulation planning; Robots; Segmentation; Task analysis; Task space region
Online Access: Full text
Detailed Description

Robots operating in domestic environments generally need to interact with articulated objects such as doors, drawers, laptops, and swivel chairs. The rigid bodies that make up these objects are connected by revolute or prismatic pairs. Robots are expected to learn and understand an object's articulated constraints through a simple interaction method; in this way, the autonomy of robot manipulation is greatly improved in environments with unstructured constraints. This paper proposes a method for obtaining an articulated object's constraint model by learning from a one-shot continuous visual demonstration that contains multistep movements, which enables a human teacher to demonstrate several tasks in succession without manual segmentation. Finally, a six-degree-of-freedom robot uses the constraint model obtained by demonstration learning to plan the manipulation of various tasks based on the AG-CBiRRT algorithm.
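The abstract turns on identifying whether each articulated connection is revolute or prismatic from an observed demonstration. The paper's own pipeline (HMM-based movement segmentation, task space regions, AG-CBiRRT planning) is not reproduced here; the following is only a minimal sketch, under the assumption that a single segmented motion is available as recorded 3D handle positions, of how such a motion might be classified as prismatic (line fit) or revolute (circle fit) by comparing fitting residuals. All function names are hypothetical, not the authors' implementation.

    import numpy as np

    def fit_prismatic(points):
        # Fit a line through the points via PCA; a prismatic (sliding)
        # joint constrains the handle to move along a straight axis.
        center = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - center)
        axis = vt[0]                               # dominant direction of travel
        along = (points - center) @ axis
        residual = (points - center) - np.outer(along, axis)
        return axis, np.sqrt((residual ** 2).sum(axis=1).mean())

    def fit_revolute(points):
        # Fit a circle in the best-fit plane; a revolute (hinge) joint
        # constrains the handle to an arc about a fixed axis.
        center = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - center)
        uv = (points - center) @ vt[:2].T          # planar coordinates
        # Kasa algebraic circle fit: u^2 + v^2 = 2a*u + 2b*v + c.
        A = np.column_stack([2.0 * uv, np.ones(len(uv))])
        (a, b, c), *_ = np.linalg.lstsq(A, (uv ** 2).sum(axis=1), rcond=None)
        radius = np.sqrt(c + a ** 2 + b ** 2)
        in_plane = np.linalg.norm(uv - [a, b], axis=1) - radius
        out_of_plane = (points - center) @ vt[2]   # deviation from the plane
        return radius, np.sqrt((in_plane ** 2 + out_of_plane ** 2).mean())

    def classify_joint(points):
        # Keep whichever constraint model explains the demonstration better.
        _, err_p = fit_prismatic(points)
        _, err_r = fit_revolute(points)
        return "prismatic" if err_p < err_r else "revolute"

    # Example: a handle swinging through a 60-degree arc of radius 0.8 m,
    # as when a door is opened.
    theta = np.linspace(0.0, np.pi / 3.0, 50)
    door = np.column_stack([0.8 * np.cos(theta), 0.8 * np.sin(theta), np.zeros(50)])
    print(classify_joint(door))                   # -> revolute

Comparing residuals this way is only reliable when the demonstrated motion covers enough of the joint's range: a very short arc is nearly indistinguishable from a straight line, which is one reason constraint learning benefits from complete, multistep demonstrations of the kind the paper uses.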