Should We Learn Contact-Rich Manipulation Policies from Sampling-Based Planners?

The tremendous success of behavior cloning (BC) in robotic manipulation has been largely confined to tasks where demonstrations can be effectively collected through human teleoperation. However, demonstrations for contact-rich manipulation tasks that require complex coordination of multiple contacts are difficult to collect due to the limitations of current teleoperation interfaces. We investigate how to leverage model-based planning and optimization to generate training data for contact-rich dexterous manipulation tasks. Our analysis reveals that popular sampling-based planners like rapidly exploring random tree (RRT), while efficient for motion planning, produce demonstrations with unfavorably high entropy. This motivates modifications to our data generation pipeline that prioritizes demonstration consistency while maintaining solution diversity. Combined with a diffusion-based goal-conditioned BC approach, our method enables effective policy learning and zero-shot transfer to hardware for two challenging contact-rich manipulation tasks.
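To illustrate the planner the paper analyzes: below is a minimal sketch of a rapidly exploring random tree (RRT) in a 2-D configuration space. This is not the authors' implementation; all function names and parameters (`sample_fn`, `collision_free`, `step_size`, `goal_tol`) are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, sample_fn, collision_free,
        step_size=0.1, goal_tol=0.1, max_iters=5000):
    """Minimal RRT: grow a tree from `start` by sampling random
    configurations, steering the nearest node toward each sample,
    and stopping once a node lands within `goal_tol` of `goal`."""
    nodes = [tuple(start)]
    parent = {0: None}  # index of each node's parent in the tree
    for _ in range(max_iters):
        q_rand = sample_fn()
        # Nearest existing node (linear scan; a k-d tree scales better).
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        d = math.dist(q_near, q_rand)
        if d == 0.0:
            continue
        # Steer one (clamped) step from q_near toward q_rand.
        step = min(step_size, d)
        q_new = tuple(a + step * (b - a) / d for a, b in zip(q_near, q_rand))
        if not collision_free(q_near, q_new):
            continue
        nodes.append(q_new)
        parent[len(nodes) - 1] = i_near
        if math.dist(q_new, goal) <= goal_tol:
            # Walk parent pointers back to the root to recover the path.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None  # no path found within the iteration budget
```

Because extensions are driven by uniform random samples, repeated calls return very different paths to the same goal. That run-to-run variability is the "unfavorably high entropy" in the resulting demonstrations that the abstract refers to.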

Bibliographic Details

Main authors: Zhu, Huaijiang; Zhao, Tong; Ni, Xinpei; Wang, Jiuguang; Fang, Kuan; Righetti, Ludovic; Pang, Tao
Format: Article
Language: English
Online access: https://arxiv.org/abs/2412.09743
DOI: 10.48550/arxiv.2412.09743
Source: arXiv.org
Subjects: Computer Science - Robotics