M2Diffuser: Diffusion-based Trajectory Optimization for Mobile Manipulation in 3D Scenes

Recent advances in diffusion models have opened new avenues for research into embodied AI agents and robotics. Despite significant achievements in complex robotic locomotion and skills, mobile manipulation, a capability that requires coordinating navigation and manipulation, remains a challenge for generative AI techniques. This is primarily due to the high-dimensional action space, extended motion trajectories, and interactions with the surrounding environment. In this paper, we introduce M2Diffuser, a diffusion-based, scene-conditioned generative model that directly generates coordinated and efficient whole-body motion trajectories for mobile manipulation based on robot-centric 3D scans. M2Diffuser first learns trajectory-level distributions from mobile manipulation trajectories provided by an expert planner. Crucially, it incorporates an optimization module that can flexibly accommodate physical constraints and task objectives, modeled as cost and energy functions, during the inference process. This enables the reduction of physical violations and execution errors at each denoising step in a fully differentiable manner. Through benchmarking on three types of mobile manipulation tasks across over 20 scenes, we demonstrate that M2Diffuser outperforms state-of-the-art neural planners and successfully transfers the generated trajectories to a real-world robot. Our evaluations underscore the potential of generative AI to enhance the generalization of traditional planning and learning-based robotic methods, while also highlighting the critical role of enforcing physical constraints for safe and robust execution.
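The abstract's key mechanism, reducing physical violations "at each denoising step in a fully differentiable manner", is in essence gradient-based cost guidance on the diffusion sampler. The sketch below illustrates that general recipe in PyTorch; the denoiser interface, the `trajectory_cost` terms, and `guide_scale` are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch

def trajectory_cost(traj):
    # Illustrative stand-ins for the paper's cost/energy terms:
    # squared finite-difference accelerations (smoothness) plus a soft
    # penalty for configurations outside unit "joint limits".
    acc = traj[:, 2:] - 2.0 * traj[:, 1:-1] + traj[:, :-2]
    return (acc ** 2).sum() + 0.1 * torch.relu(traj.abs() - 1.0).sum()

@torch.no_grad()
def guided_sample(denoiser, scene_feat, betas, shape, guide_scale=1.0):
    # Reverse DDPM sampling with cost guidance applied at every denoising step.
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # (batch, horizon, dof), start from pure noise
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = denoiser(x, t_batch, scene_feat)  # scene-conditioned noise prediction
        # Estimate the clean trajectory implied by the current noisy sample.
        x0_hat = (x - torch.sqrt(1.0 - alphas_bar[t]) * eps) / torch.sqrt(alphas_bar[t])
        # Optimization module (sketch): nudge the estimate along the
        # negative gradient of a differentiable cost.
        with torch.enable_grad():
            x0_g = x0_hat.detach().requires_grad_(True)
            grad = torch.autograd.grad(trajectory_cost(x0_g), x0_g)[0]
        x0_hat = x0_hat - guide_scale * grad
        # Standard DDPM posterior step using the guided x0 estimate.
        ab_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
        mean = (torch.sqrt(ab_prev) * betas[t] / (1.0 - alphas_bar[t])) * x0_hat \
             + (torch.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - alphas_bar[t])) * x
        if t > 0:
            var = betas[t] * (1.0 - ab_prev) / (1.0 - alphas_bar[t])
            x = mean + torch.sqrt(var) * torch.randn_like(x)
        else:
            x = mean
    return x
```

For a quick smoke test, a dummy denoiser such as `denoiser = lambda x, t, c: torch.zeros_like(x)` with `betas = torch.linspace(1e-4, 0.02, 50)` and `shape = (1, 16, 10)` suffices; in practice the guide scale trades off sample fidelity against constraint satisfaction, which is why the paper stresses enforcing physical constraints during inference rather than only at training time.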

Bibliographic Details
Main Authors: Yan, Sixu; Zhang, Zeyu; Han, Muzhi; Wang, Zaijin; Xie, Qi; Li, Zhitian; Li, Zhehan; Liu, Hangxin; Wang, Xinggang; Zhu, Song-Chun
Format: Article
Language: English
Subjects: Computer Science - Robotics
Online Access: Full text at https://arxiv.org/abs/2410.11402
DOI: 10.48550/arxiv.2410.11402
Published: 2024-10-15