Prediction with Action: Visual Policy Learning via Joint Denoising Process
Diffusion models have demonstrated remarkable capabilities in image generation tasks, including image editing and video creation, reflecting a good understanding of the physical world. On the other hand, diffusion models have also shown promise in robotic control tasks by denoising actions, known...
Saved in:
Main Authors: | Guo, Yanjiang; Hu, Yucheng; Zhang, Jianke; Wang, Yen-Jen; Chen, Xiaoyu; Lu, Chaochao; Chen, Jianyu |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Robotics |
Online Access: | Order full text |
creator | Guo, Yanjiang Hu, Yucheng Zhang, Jianke Wang, Yen-Jen Chen, Xiaoyu Lu, Chaochao Chen, Jianyu |
description | Diffusion models have demonstrated remarkable capabilities in image
generation tasks, including image editing and video creation, reflecting a
good understanding of the physical world. On the other hand, diffusion models
have also shown promise in robotic control tasks by denoising actions, known as
diffusion policy. Although the diffusion generative model and the diffusion policy
exhibit distinct capabilities--image prediction and robotic action,
respectively--they technically follow a similar denoising process. In robotic
tasks, the ability to predict future images and the ability to generate actions
are highly correlated, since both share the same underlying dynamics of the
physical world. Building on this insight, we introduce PAD, a novel visual
policy learning framework that unifies image Prediction and robot Action within
a joint Denoising process. Specifically, PAD utilizes Diffusion Transformers
(DiT) to seamlessly integrate images and robot states, enabling the
simultaneous prediction of future images and robot actions. Additionally, PAD
supports co-training on both robotic demonstrations and large-scale video
datasets and can be easily extended to other robotic modalities, such as depth
images. PAD outperforms previous methods, achieving a significant 26.3%
relative improvement on the full Metaworld benchmark, using a single
text-conditioned visual policy in a data-efficient imitation learning setting.
Furthermore, PAD demonstrates superior generalization to unseen tasks in
real-world robot manipulation settings, with a 28.0% increase in success rate
compared to the strongest baseline. Project page:
https://sites.google.com/view/pad-paper |
doi_str_mv | 10.48550/arxiv.2411.18179 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2411.18179 |
language | eng |
recordid | cdi_arxiv_primary_2411_18179 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Robotics |
title | Prediction with Action: Visual Policy Learning via Joint Denoising Process |
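The abstract's core claim is that image prediction and action generation can share one denoising process: a Diffusion Transformer consumes image tokens and a noisy robot action as a single sequence and predicts the noise for both. Below is a minimal, hypothetical sketch of that idea, not the paper's implementation; the module names, token counts, and dimensions are illustrative assumptions.

```python
# Hypothetical sketch of joint denoising as described in the abstract:
# image tokens and a robot-action token are denoised together by one
# transformer. All sizes and names are illustrative, not from the paper.
import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    def __init__(self, img_dim=64, act_dim=7, hidden=128, max_steps=1000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)      # embed image patch tokens
        self.act_proj = nn.Linear(act_dim, hidden)      # embed the noisy action
        self.time_emb = nn.Embedding(max_steps, hidden) # diffusion timestep embedding
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.img_head = nn.Linear(hidden, img_dim)      # predict image noise
        self.act_head = nn.Linear(hidden, act_dim)      # predict action noise

    def forward(self, noisy_img_tokens, noisy_action, t):
        # Build one joint sequence: [timestep, image tokens..., action token].
        tok_t = self.time_emb(t).unsqueeze(1)
        tok_img = self.img_proj(noisy_img_tokens)
        tok_act = self.act_proj(noisy_action).unsqueeze(1)
        h = self.backbone(torch.cat([tok_t, tok_img, tok_act], dim=1))
        # Split the joint sequence back into image and action noise estimates.
        return self.img_head(h[:, 1:-1]), self.act_head(h[:, -1])

# Usage: jointly denoise 16 future-image tokens and one 7-DoF action.
model = JointDenoiser()
eps_img, eps_act = model(torch.randn(2, 16, 64), torch.randn(2, 7),
                         torch.randint(0, 1000, (2,)))
print(eps_img.shape, eps_act.shape)  # torch.Size([2, 16, 64]) torch.Size([2, 7])
```

The point of the sketch is the coupling the abstract highlights: one forward pass through a shared backbone yields noise estimates for both the predicted future frame and the robot action, so both predictions are driven by the same learned dynamics.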