Motion planning from demonstrations and polynomial optimization for visual servoing applications

Vision feedback control techniques are desirable for a wide range of robotics applications due to their robustness to image noise and modeling errors. However, in the case of a robot-mounted camera, they encounter difficulties when the camera traverses large displacements. This scenario necessitates continuous visual target feedback during the robot motion, while simultaneously considering the robot's self- and external constraints. Herein, we propose to combine workspace (Cartesian-space) path planning with robot teach-by-demonstration to address the visibility constraint, joint limits, and "whole arm" collision avoidance for vision-based control of a robot manipulator. User demonstration data generate safe regions for robot motion with respect to joint limits and potential "whole arm" collisions. Our algorithm uses these safe regions to generate new feasible trajectories under a visibility constraint that achieve the desired view of the target (e.g., a pre-grasping location) in new, undemonstrated locations. Experiments with a 7-DOF articulated arm validate the proposed method.
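As a rough illustration of the idea described in the abstract (not the paper's actual formulation), deriving per-joint safe regions from demonstration trajectories and screening a candidate trajectory against those regions plus a simple field-of-view visibility test might be sketched as below. All function names, the box-shaped safe region, and the cone-shaped visibility model are assumptions made for this sketch only:

```python
import numpy as np

def safe_joint_regions(demos, margin=0.05):
    """Per-joint [lo, hi] bounds spanned by demonstration trajectories.

    demos: list of (T_i, n_joints) arrays of joint configurations.
    Returns bounds widened by a small safety margin (radians).
    """
    stacked = np.vstack(demos)                 # (sum T_i, n_joints)
    lo = stacked.min(axis=0) - margin
    hi = stacked.max(axis=0) + margin
    return lo, hi

def trajectory_feasible(traj, lo, hi):
    """True if every waypoint of traj stays inside the demonstrated region."""
    traj = np.asarray(traj, dtype=float)
    return bool(np.all(traj >= lo) and np.all(traj <= hi))

def target_visible(p_cam, half_fov_rad=0.5):
    """Crude visibility constraint: the target point, expressed in the
    camera frame (z axis pointing forward), must lie inside a symmetric
    cone of the given half-angle."""
    p = np.asarray(p_cam, dtype=float)
    if p[2] <= 0.0:                            # behind the camera
        return False
    off_axis = np.arctan2(np.linalg.norm(p[:2]), p[2])
    return bool(off_axis <= half_fov_rad)

if __name__ == "__main__":
    # Two toy 2-joint demonstrations; a planner would keep candidate
    # trajectories inside the region they span and visible throughout.
    demos = [np.array([[0.0, 0.0], [0.5, 1.0]]),
             np.array([[0.2, 0.4], [0.6, 0.8]])]
    lo, hi = safe_joint_regions(demos)
    print(trajectory_feasible([[0.1, 0.5], [0.6, 1.0]], lo, hi))  # inside
    print(target_visible([0.0, 0.0, 1.0]))                        # on axis
```

In the paper itself the trajectory is additionally shaped by polynomial optimization; the sketch above only shows the constraint-checking side of such a pipeline.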

Bibliographic Details

Main authors: Tiantian Shen, Radmard, Sina, Chan, Ambrose, Croft, Elizabeth A., Chesi, Graziano
Format: Conference Proceeding
Language: English
Pages: 578-583
DOI: 10.1109/IROS.2013.6696409
ISSN: 2153-0858
EISSN: 2153-0866
EISBN: 9781467363587
Published in: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, p.578-583
Source: IEEE Electronic Library (IEL) Conference Proceedings
Subjects: Cameras; Collision avoidance; Joints; Polynomials; Robot vision systems; Trajectory