Prediction with Action: Visual Policy Learning via Joint Denoising Process
Format: Article
Language: English
Abstract: Diffusion models have demonstrated remarkable capabilities in image
generation tasks, including image editing and video creation, reflecting a
strong understanding of the physical world. On the other hand, diffusion
models have also shown promise in robotic control tasks by denoising actions,
an approach known as diffusion policy. Although the diffusion generative model
and the diffusion policy exhibit distinct capabilities (image prediction and
robotic action, respectively), they technically follow a similar denoising
process. In robotic tasks, the ability to predict future images and the
ability to generate actions are highly correlated, since both share the same
underlying dynamics of the physical world. Building on this insight, we
introduce PAD, a novel visual policy learning framework that unifies image
Prediction and robot Action within a joint Denoising process. Specifically,
PAD utilizes Diffusion Transformers (DiT) to seamlessly integrate images and
robot states, enabling the simultaneous prediction of future images and robot
actions. Additionally, PAD supports co-training on both robotic demonstrations
and large-scale video datasets, and can be easily extended to other robotic
modalities, such as depth images. PAD outperforms previous methods, achieving
a significant 26.3% relative improvement on the full Metaworld benchmark,
using a single text-conditioned visual policy in a data-efficient imitation
learning setting. Furthermore, PAD demonstrates superior generalization to
unseen tasks in real-world robot manipulation, with a 28.0% increase in
success rate over the strongest baseline. Project page:
https://sites.google.com/view/pad-paper
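
To make the joint denoising idea concrete, here is a minimal sketch of one co-denoising training step. This is not the authors' implementation: the module sizes, the plain `nn.TransformerEncoder` standing in for DiT, and the toy linear noise schedule are all assumptions for illustration. What it shows is the core mechanism the abstract describes: noisy future-image tokens and a noisy action vector are embedded into a single token sequence, denoised by one shared transformer, and trained with a summed noise-prediction loss.

```python
# Illustrative sketch of PAD-style joint denoising, NOT the paper's code.
# Dimensions, the TransformerEncoder backbone, and the noise schedule are
# assumptions made for this example.
import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    """Denoises future-image tokens and a robot-action token in one pass."""
    def __init__(self, img_dim=64, act_dim=7, hidden=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)   # embed noisy image patches
        self.act_proj = nn.Linear(act_dim, hidden)   # embed noisy action vector
        self.time_emb = nn.Linear(1, hidden)         # diffusion-timestep embedding
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)  # DiT stand-in
        self.img_head = nn.Linear(hidden, img_dim)   # predict image noise
        self.act_head = nn.Linear(hidden, act_dim)   # predict action noise

    def forward(self, noisy_img, noisy_act, t):
        # Concatenate image and action tokens into ONE sequence so attention
        # shares physical-dynamics information across both modalities.
        tok = torch.cat([self.img_proj(noisy_img),
                         self.act_proj(noisy_act).unsqueeze(1)], dim=1)
        tok = tok + self.time_emb(t.view(-1, 1, 1).float())  # broadcast over tokens
        h = self.backbone(tok)
        img_noise = self.img_head(h[:, :-1])  # per-patch noise estimate
        act_noise = self.act_head(h[:, -1])   # action noise estimate
        return img_noise, act_noise

# One joint training step: corrupt both modalities at the same timestep,
# predict the added noise for each, and sum the two losses.
model = JointDenoiser()
img = torch.randn(8, 16, 64)       # future-image patch tokens (batch, tokens, dim)
act = torch.randn(8, 7)            # robot action (batch, action_dim)
t = torch.randint(0, 1000, (8,))   # diffusion timesteps
alpha = 1 - t.float() / 1000       # toy linear noise schedule (assumption)
eps_i, eps_a = torch.randn_like(img), torch.randn_like(act)
noisy_img = alpha.view(-1, 1, 1).sqrt() * img + (1 - alpha).view(-1, 1, 1).sqrt() * eps_i
noisy_act = alpha.view(-1, 1).sqrt() * act + (1 - alpha).view(-1, 1).sqrt() * eps_a
pred_i, pred_a = model(noisy_img, noisy_act, t)
loss = nn.functional.mse_loss(pred_i, eps_i) + nn.functional.mse_loss(pred_a, eps_a)
loss.backward()
```

A natural consequence of this layout, and one plausible reading of the co-training claim (the abstract does not spell out the mechanism, so this is an assumption), is that video-only samples without action labels can still contribute: one simply drops the action-noise loss term for those samples, letting large-scale video data supervise the shared backbone through the image-prediction branch alone.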
DOI: 10.48550/arxiv.2411.18179