Balancing the Causal Effects in Class-Incremental Learning

Class-Incremental Learning (CIL) is a practical and challenging problem for achieving general artificial intelligence. Recently, Pre-Trained Models (PTMs) have led to breakthroughs in both visual and natural language processing tasks. Despite recent studies showing PTMs' potential ability to learn sequentially, a plethora of work indicates the necessity of alleviating the catastrophic forgetting of PTMs. Through a pilot study and a causal analysis of CIL, we reveal that the crux lies in the imbalanced causal effects between new and old data. Specifically, the new data encourage the model to adapt to new classes while hindering its adaptation to old classes. Similarly, the old data encourage the model to adapt to old classes while hindering its adaptation to new classes. In other words, the adaptation processes for new and old classes conflict from a causal perspective. To alleviate this problem, we propose Balancing the Causal Effects (BaCE) in CIL. Concretely, BaCE proposes two objectives for building causal paths from both new and old data to the predictions of new and old classes, respectively. In this way, the model is encouraged to adapt to all classes with causal effects from both new and old data, which alleviates the causal imbalance problem. We conduct extensive experiments on continual image classification, continual text classification, and continual named entity recognition. Empirical results show that BaCE outperforms a series of CIL methods across different tasks and settings.
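The abstract describes the method only at a high level, so as a rough illustration of the causal-imbalance idea, a minimal training-step sketch follows. It is not the authors' BaCE objectives: the joint replay setup, the function and parameter names, and the equal weighting alpha are all assumptions made for illustration.

    import torch.nn.functional as F

    def balanced_step(model, optimizer, new_batch, old_batch, alpha=0.5):
        # Hypothetical sketch, not the paper's BaCE objectives: let both
        # new and replayed old examples produce gradients over the full
        # set of classes seen so far, so that neither group of classes
        # adapts under the causal effect of only one data source.
        x_new, y_new = new_batch   # current-task examples and labels
        x_old, y_old = old_batch   # replayed old-task examples and labels

        logits_new = model(x_new)  # logits over all old + new classes
        logits_old = model(x_old)

        # Unmasked cross-entropy over the full class set: new data also
        # shapes old-class logits, and old data also shapes new-class logits.
        loss = (alpha * F.cross_entropy(logits_new, y_new)
                + (1.0 - alpha) * F.cross_entropy(logits_old, y_old))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Plain joint training of this kind only approximates the goal; per the abstract, BaCE instead builds explicit causal paths from both data sources to the predictions of both class groups.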

Bibliographic Details
Main authors: Zheng, Junhao; Wang, Ruiyan; Zhang, Chongzhi; Feng, Huawen; Ma, Qianli
Format: Article
Language: English
Published: 2024-02-15
Subjects: Computer Science - Learning
DOI: 10.48550/arxiv.2402.10063
Online access: https://arxiv.org/abs/2402.10063
Source: arXiv.org