Contact-rich SE(3)-Equivariant Robot Manipulation Task Learning via Geometric Impedance Control
This paper presents a differential geometric control approach that leverages SE(3) group invariance and equivariance to increase transferability in learning robot manipulation tasks that involve interaction with the environment. Specifically, we employ a control law and a learning representation framework that remain invariant under arbitrary SE(3) transformations of the manipulation task definition.
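The invariance and equivariance properties claimed above can be stated compactly. The following is a minimal sketch in generic SE(3) notation; the symbols $g$, $g_d$, $g_b$ and the error map $e$ are introduced here for illustration and need not match the paper's own notation:

$$ e(g_b\,g,\; g_b\,g_d) = e(g,\, g_d), \qquad \forall\, g_b \in SE(3), $$

i.e. the geometrically consistent error, and hence any gain-scheduling policy computed from it, is unchanged when the current pose $g$ and the desired pose $g_d$ of the task are both moved by the same rigid transformation $g_b$, while quantities expressed in the spatial frame transform together with $g_b$ (equivariance).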
Saved in:
Published in: | arXiv.org 2023-12 |
---|---|
Main authors: | Seo, Joohwan; Nikhil Potu Surya Prakash; Zhang, Xiang; Wang, Changhao; Choi, Jongeun; Tomizuka, Masayoshi; Horowitz, Roberto |
Format: | Article |
Language: | eng |
Keywords: | Cartesian coordinates; Computer Science - Robotics; Computer Science - Systems and Control; Control theory; Differential geometry; Gain scheduling; Impedance; Invariance; Invariants; Neural networks; Representations; Robots; Supervised learning |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Seo, Joohwan; Nikhil Potu Surya Prakash; Zhang, Xiang; Wang, Changhao; Choi, Jongeun; Tomizuka, Masayoshi; Horowitz, Roberto |
description | This paper presents a differential geometric control approach that leverages SE(3) group invariance and equivariance to increase transferability in learning robot manipulation tasks that involve interaction with the environment. Specifically, we employ a control law and a learning representation framework that remain invariant under arbitrary SE(3) transformations of the manipulation task definition. Furthermore, the control law and learning representation framework are shown to be SE(3) equivariant when represented relative to the spatial frame. The proposed approach is based on utilizing a recently presented geometric impedance control (GIC) combined with a learning variable impedance control framework, where the gain scheduling policy is trained in a supervised learning fashion from expert demonstrations. A geometrically consistent error vector (GCEV) is fed to a neural network to achieve a gain scheduling policy that remains invariant to arbitrary translation and rotations. A comparison of our proposed control and learning framework with a well-known Cartesian space learning impedance control, equipped with a Cartesian error vector-based gain scheduling policy, confirms the significantly superior learning transferability of our proposed approach. A hardware implementation on a peg-in-hole task is conducted to validate the learning transferability and feasibility of the proposed approach. |
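The description above outlines a concrete pipeline: a geometrically consistent error vector (GCEV) computed from the current and desired end-effector poses is fed to a neural network that schedules impedance gains, and because the error depends only on the relative pose, the policy is invariant to rigid transformations of the task. Below is a minimal, self-contained sketch of that idea in Python/NumPy. It is not the authors' implementation: the particular error construction (body-frame position error plus the standard geometric attitude error), the tiny MLP, and all names (`gcev`, `GainPolicy`) are assumptions made here for illustration.

```python
import numpy as np

def vee(S):
    """Map a 3x3 skew-symmetric matrix to its 3-vector."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def gcev(R, p, R_d, p_d):
    """Illustrative 6-D geometrically consistent error vector.

    The position error is expressed in the body frame and the rotation
    error is the standard geometric attitude error, so the result depends
    only on the relative pose: left-multiplying both (R, p) and (R_d, p_d)
    by the same SE(3) transform leaves it unchanged.
    """
    e_p = R.T @ (p - p_d)                      # body-frame position error
    e_R = 0.5 * vee(R_d.T @ R - R.T @ R_d)     # geometric rotation error
    return np.concatenate([e_p, e_R])

class GainPolicy:
    """Tiny MLP mapping the 6-D error to strictly positive impedance gains."""

    def __init__(self, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(hidden, 6))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(6, hidden))
        self.b2 = np.zeros(6)

    def __call__(self, e):
        h = np.tanh(self.W1 @ e + self.b1)
        # softplus keeps the scheduled stiffness/damping gains positive
        return np.log1p(np.exp(self.W2 @ h + self.b2)) + 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)

    def random_rotation():
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        return Q * np.sign(np.linalg.det(Q))   # ensure det = +1

    R, R_d, R_b = random_rotation(), random_rotation(), random_rotation()
    p, p_d, p_b = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

    policy = GainPolicy()
    gains_before = policy(gcev(R, p, R_d, p_d))
    gains_after = policy(gcev(R_b @ R, R_b @ p + p_b,
                              R_b @ R_d, R_b @ p_d + p_b))
    # Gains are identical after an arbitrary rigid transform of the task.
    assert np.allclose(gains_before, gains_after)
```

Running the script checks numerically that the scheduled gains are unchanged when both the current and desired poses are moved by the same arbitrary SE(3) transform, which is the transferability property the abstract emphasizes.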
doi_str_mv | 10.48550/arxiv.2308.14984 |
format | Article |
publisher | Ithaca: Cornell University Library, arXiv.org |
publication date | 2023-12-18 |
rights | 2023. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
published version | https://doi.org/10.1109/LRA.2023.3346748 |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2308_14984 |
source | arXiv.org; Free E-Journals |
subjects | Cartesian coordinates; Computer Science - Robotics; Computer Science - Systems and Control; Control theory; Differential geometry; Gain scheduling; Impedance; Invariance; Invariants; Neural networks; Representations; Robots; Supervised learning |
title | Contact-rich SE(3)-Equivariant Robot Manipulation Task Learning via Geometric Impedance Control |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-17T09%3A36%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Contact-rich%20SE(3)-Equivariant%20Robot%20Manipulation%20Task%20Learning%20via%20Geometric%20Impedance%20Control&rft.jtitle=arXiv.org&rft.au=Seo,%20Joohwan&rft.date=2023-12-18&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2308.14984&rft_dat=%3Cproquest_arxiv%3E2858804657%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2858804657&rft_id=info:pmid/&rfr_iscdi=true |