Optimized vision-based robot motion planning from multiple demonstrations

Bibliographic details

Published in: Autonomous Robots, 2018-08, Vol. 42 (6), pp. 1117-1132
Authors: Shen, Tiantian; Radmard, Sina; Chan, Ambrose; Croft, Elizabeth A.; Chesi, Graziano
Format: Article
Language: English
ISSN: 0929-5593
EISSN: 1573-7527
DOI: 10.1007/s10514-017-9667-4
Source: SpringerLink Journals
Online access: Full text
Abstract

This paper combines workspace models with optimization techniques to simultaneously address whole-arm collision avoidance, joint limits and camera field of view (FOV) limits for vision-based motion planning of a robot manipulator. A small number of user demonstrations are used to generate a feasible domain over which the whole robot arm can servo without violating joint limits or colliding with obstacles. Our algorithm utilizes these demonstrations to generate new feasible trajectories that keep the target in the camera's FOV and achieve the desired view of the target (e.g., a pre-grasping location) in new, undemonstrated locations. To fulfill these requirements, a set of control points are selected within the feasible domain. Camera trajectories that traverse these control points are modeled and optimized using either quintic splines (for fast computation) or general polynomials (for better constraint satisfaction). Experiments with a seven degree of freedom articulated arm validate the proposed scheme.
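As a rough illustration of the quintic-spline idea mentioned in the abstract (not the authors' implementation, whose full formulation with FOV, joint-limit, and collision constraints is given in the paper), the sketch below fits one quintic segment per coordinate between two camera waypoints, matching position, velocity, and acceleration at both ends by solving a 6x6 linear system. All names, waypoints, and boundary values here are hypothetical.

```python
import numpy as np

def quintic_segment(t0, t1, p0, p1, v0=0.0, v1=0.0, a0=0.0, a1=0.0):
    """Coefficients c[0..5] of p(t) = sum_k c_k * t**k that match the
    given position, velocity, and acceleration at both endpoints."""
    def rows(t):
        # One row each for position, velocity, and acceleration at time t.
        return np.array([
            [1.0, t, t**2,     t**3,     t**4,      t**5],
            [0.0, 1.0, 2*t,  3*t**2,   4*t**3,    5*t**4],
            [0.0, 0.0, 2.0,     6*t,  12*t**2,   20*t**3],
        ])
    A = np.vstack([rows(t0), rows(t1)])          # 6x6 constraint matrix
    b = np.array([p0, v0, a0, p1, v1, a1])       # boundary conditions
    return np.linalg.solve(A, b)

# Hypothetical pair of camera control points (x, y, z), traversed in 2 s:
waypoints = [(0.0, 0.3), (0.1, -0.2), (0.5, 0.6)]
coeffs = [quintic_segment(0.0, 2.0, p0, p1) for p0, p1 in waypoints]
ts = np.linspace(0.0, 2.0, 50)
# np.polyval wants highest-degree coefficient first, hence the reversal.
traj = np.stack([np.polyval(c[::-1], ts) for c in coeffs], axis=1)  # 50x3
```

In the paper's setting, segments like these would additionally be optimized subject to the feasibility constraints derived from the demonstrations; the general-polynomial variant raises the degree beyond five, trading computation time for better constraint satisfaction.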
Subjects

Algorithms
Artificial Intelligence
Cameras
Collision avoidance
Computer Imaging
Control
Engineering
Field of view
Grasping (robotics)
Mechatronics
Motion planning
Optimization
Optimization techniques
Pattern Recognition and Graphics
Polynomials
Robot arms
Robot dynamics
Robotics
Robotics and Automation
Robots
Splines
Trajectory control
Vision