Continual Diffuser (CoD): Mastering Continual Offline Reinforcement Learning with Experience Rehearsal
Artificial neural networks, especially recent diffusion-based models, have shown remarkable superiority in gaming, control, and QA systems, where the training tasks' datasets are usually static. However, in real-world applications, such as robotic control with reinforcement learning (RL), the tasks change and new tasks arise sequentially. This situation poses the new challenge of the plasticity-stability trade-off: training an agent that can adapt to task changes and retain acquired knowledge. In view of this, we propose a rehearsal-based continual diffusion model, called Continual Diffuser (CoD), to endow the diffuser with the capabilities of quick adaptation (plasticity) and lasting retention (stability). Specifically, we first construct an offline benchmark that contains 90 tasks from multiple domains. Then, we train CoD on each task with sequential modeling and conditional generation for decision making. Next, we preserve a small portion of previous datasets as a rehearsal buffer and replay it to retain the acquired knowledge. Extensive experiments on a series of tasks show that CoD achieves a promising plasticity-stability trade-off and outperforms existing diffusion-based methods and other representative baselines on most tasks.
Saved in:
Main authors: | Hu, Jifeng; Shen, Li; Huang, Sili; Yang, Zhejian; Chen, Hechang; Sun, Lichao; Chang, Yi; Tao, Dacheng |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Learning |
Online access: | Order full text |
creator | Hu, Jifeng; Shen, Li; Huang, Sili; Yang, Zhejian; Chen, Hechang; Sun, Lichao; Chang, Yi; Tao, Dacheng |
description | Artificial neural networks, especially recent diffusion-based models, have shown remarkable superiority in gaming, control, and QA systems, where the training tasks' datasets are usually static. However, in real-world applications, such as robotic control with reinforcement learning (RL), the tasks change and new tasks arise sequentially. This situation poses the new challenge of the plasticity-stability trade-off: training an agent that can adapt to task changes and retain acquired knowledge. In view of this, we propose a rehearsal-based continual diffusion model, called Continual Diffuser (CoD), to endow the diffuser with the capabilities of quick adaptation (plasticity) and lasting retention (stability). Specifically, we first construct an offline benchmark that contains 90 tasks from multiple domains. Then, we train CoD on each task with sequential modeling and conditional generation for decision making. Next, we preserve a small portion of previous datasets as a rehearsal buffer and replay it to retain the acquired knowledge. Extensive experiments on a series of tasks show that CoD achieves a promising plasticity-stability trade-off and outperforms existing diffusion-based methods and other representative baselines on most tasks. |
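The rehearsal mechanism the abstract describes, retaining a small portion of each previous task's data and replaying it while training on later tasks, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; the class, function names, and the 10% retention rate are assumptions chosen for the example:

```python
import random

class RehearsalBuffer:
    """Keeps a small random subset of each finished task's dataset for replay."""

    def __init__(self, keep_fraction=0.1, seed=0):
        self.keep_fraction = keep_fraction
        self.rng = random.Random(seed)
        self.buffer = []  # samples retained from all previous tasks

    def add_task(self, task_dataset):
        # Retain only a small random portion of the completed task's data.
        k = max(1, int(len(task_dataset) * self.keep_fraction))
        self.buffer.extend(self.rng.sample(task_dataset, k))

    def sample(self, batch_size):
        # Draw rehearsal samples to mix into the current task's batches.
        k = min(batch_size, len(self.buffer))
        return self.rng.sample(self.buffer, k)

def train_continual(tasks, batch_size=4):
    """Hypothetical sequential training loop over a list of task datasets."""
    buf = RehearsalBuffer(keep_fraction=0.1)
    for task in tasks:
        for _ in range(2):  # placeholder for real gradient steps
            current = [random.choice(task) for _ in range(batch_size)]
            replay = buf.sample(batch_size)
            batch = current + replay  # the model would train on this mixed batch
        buf.add_task(task)  # preserve a slice of the task before moving on
    return buf
```

Mixing replayed samples from old tasks into every batch is what counters catastrophic forgetting (stability), while ordinary training on the current task provides plasticity.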
doi_str_mv | 10.48550/arxiv.2409.02512 |
format | Article |
fullrecord | (raw discovery-system XML record omitted; it duplicates the title, authors, and abstract shown above) |
creationdate | 2024-09-04 |
rights | http://creativecommons.org/licenses/by/4.0 |
link | https://arxiv.org/abs/2409.02512 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2409.02512 |
language | eng |
recordid | cdi_arxiv_primary_2409_02512 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning |
title | Continual Diffuser (CoD): Mastering Continual Offline Reinforcement Learning with Experience Rehearsal |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T20%3A51%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Continual%20Diffuser%20(CoD):%20Mastering%20Continual%20Offline%20Reinforcement%20Learning%20with%20Experience%20Rehearsal&rft.au=Hu,%20Jifeng&rft.date=2024-09-04&rft_id=info:doi/10.48550/arxiv.2409.02512&rft_dat=%3Carxiv_GOX%3E2409_02512%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |