Dual-CBA: Improving Online Continual Learning via Dual Continual Bias Adaptors from a Bi-level Optimization Perspective

In online continual learning (CL), models trained on changing distributions easily forget previously learned knowledge and become biased toward newly received tasks. To address this issue, we present the Continual Bias Adaptor (CBA), a bi-level framework that augments the classification network to adapt to catastrophic distribution shifts during training, enabling the network to achieve a stable consolidation of all seen tasks. However, the CBA module adjusts for distribution shifts in a class-specific manner, which exacerbates the stability gap issue and, to some extent, fails to meet the need for continual testing in online CL. To mitigate this challenge, we further propose a novel class-agnostic CBA module that separately aggregates the posterior probabilities of classes from new and old tasks, and applies a stable adjustment to the resulting aggregated probabilities. We combine the two kinds of CBA modules into a unified Dual-CBA module, which is thus capable of adapting to catastrophic distribution shifts while meeting the real-time testing requirements of online CL. In addition, we propose Incremental Batch Normalization (IBN), a tailored BN module that re-estimates its population statistics to alleviate the feature bias arising from the inner-loop optimization of our bi-level framework. To validate the effectiveness of the proposed method, we theoretically provide insights into how it mitigates catastrophic distribution shifts, and empirically demonstrate its superiority through extensive experiments on four rehearsal-based baselines and three public continual learning benchmarks.
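
The class-agnostic aggregation described in the abstract lends itself to a short sketch. Below is a minimal PyTorch illustration of the idea: split the classifier's posterior into old-task and new-task groups, re-weight the two aggregated probability masses with a learnable two-way softmax, and renormalize. The module name, the two-scalar parameterization, and the exact rescaling rule are all assumptions, since the abstract does not specify the module's form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassAgnosticCBA(nn.Module):
    """Illustrative class-agnostic bias adaptor (a sketch; the paper's
    exact formulation may differ). The adjustment is shared across the
    classes of each task group rather than being class-specific."""

    def __init__(self):
        super().__init__()
        # One learnable logit per group (old vs. new) -- an assumption.
        self.group_logits = nn.Parameter(torch.zeros(2))

    def forward(self, logits: torch.Tensor, old_classes: list,
                new_classes: list) -> torch.Tensor:
        probs = F.softmax(logits, dim=1)                         # (B, C)
        p_old = probs[:, old_classes].sum(dim=1, keepdim=True)   # aggregated old-task mass
        p_new = probs[:, new_classes].sum(dim=1, keepdim=True)   # aggregated new-task mass
        w = F.softmax(self.group_logits, dim=0)                  # stable 2-way re-weighting
        # Rescale each group's classes by its adjusted share of probability mass.
        adjusted = probs.clone()
        eps = 1e-8
        adjusted[:, old_classes] *= w[0] / (p_old + eps)
        adjusted[:, new_classes] *= w[1] / (p_new + eps)
        return adjusted / adjusted.sum(dim=1, keepdim=True)      # renormalize
```

Because only two scalars are learned and applied uniformly within each group, the adjustment stays smooth across training steps, which is the property the abstract contrasts with the original class-specific CBA.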

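The IBN idea of re-estimating population statistics can likewise be sketched as a refresh of BatchNorm buffers. This is a minimal sketch assuming the statistics are recomputed from replayed (memory) data after the inner-loop update; the paper's actual incremental procedure may differ, and the loader and function names here are hypothetical.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def reestimate_bn_statistics(model: nn.Module, memory_loader,
                             device: str = "cpu", momentum: float = 0.1):
    """Illustrative re-estimation of BN population statistics, in the
    spirit of Incremental Batch Normalization (exact procedure is an
    assumption): reset each BN layer's running statistics, then refresh
    them with forward passes over replayed data."""
    bn_layers = [m for m in model.modules()
                 if isinstance(m, nn.modules.batchnorm._BatchNorm)]
    for bn in bn_layers:
        bn.reset_running_stats()   # running_mean <- 0, running_var <- 1
        bn.momentum = None         # None => exact cumulative moving average
    model.train()                  # BN updates running stats only in train mode
    for x, _ in memory_loader:
        model(x.to(device))        # forward pass only; no parameter update
    for bn in bn_layers:
        bn.momentum = momentum     # restore EMA behaviour for later training
```

Setting `momentum` to `None` makes PyTorch accumulate an exact average over all refresh batches rather than an exponential moving average, so the re-estimated statistics reflect the full replayed distribution.
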
Bibliographic Details
Main Authors: Wang, Quanziang; Wang, Renzhen; Wu, Yichen; Jia, Xixi; Zhou, Minghao; Meng, Deyu
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
Published: 2024-08-25 (arXiv)
DOI: 10.48550/arxiv.2408.13991
Online Access: https://arxiv.org/abs/2408.13991