Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method

Domain incremental learning (DIL) poses a significant challenge in real-world scenarios, as models need to be sequentially trained on diverse domains over time, all the while avoiding catastrophic forgetting. Mitigating representation drift, which refers to the phenomenon of learned representations undergoing changes as the model adapts to new tasks, can help alleviate catastrophic forgetting. In this study, we propose a novel DIL method named DARE, featuring a three-stage training process: Divergence, Adaptation, and REfinement. This process gradually adapts the representations associated with new tasks into the feature space spanned by samples from previous tasks, simultaneously integrating task-specific decision boundaries. Additionally, we introduce a novel strategy for buffer sampling and demonstrate the effectiveness of our proposed method, combined with this sampling strategy, in reducing representation drift within the feature encoder. This contribution effectively alleviates catastrophic forgetting across multiple DIL benchmarks. Furthermore, our approach prevents sudden representation drift at task boundaries, resulting in a well-calibrated DIL model that maintains the performance on previous tasks.
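As a rough illustration of the kind of method the abstract describes, the sketch below shows a generic three-stage, replay-based training loop in PyTorch. Everything here is an assumption for illustration: the names (Net, train_task), the per-stage behavior, and the replay and feature-anchoring losses are hypothetical stand-ins, not the authors' DARE implementation or its buffer-sampling strategy, which are defined in the paper itself.

```python
# Hypothetical sketch of a three-stage ("Divergence", "Adaptation",
# "REfinement") domain-incremental training loop with a replay buffer.
# Stage contents and loss terms are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, in_dim=32, feat_dim=16, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)          # shared feature space across domains
        return self.head(z), z

def train_task(model, task_data, buffer, opt, steps=100, batch=16):
    """Train on one domain in three stages, replaying a small buffer."""
    x_all, y_all = task_data
    for stage in ("divergence", "adaptation", "refinement"):
        for _ in range(steps):
            idx = torch.randint(len(x_all), (batch,))
            logits, z = model(x_all[idx])
            loss = F.cross_entropy(logits, y_all[idx])
            # Assumption: the first stage trains on new data only; later
            # stages add replay and anchor current features to stored
            # ones, a crude proxy for limiting representation drift.
            if buffer is not None and stage != "divergence":
                bx, by, bz_old = buffer
                b_logits, bz = model(bx)
                loss = loss + F.cross_entropy(b_logits, by)
                loss = loss + F.mse_loss(bz, bz_old)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Toy usage: two synthetic "domains" of 32-dim inputs, 10 classes.
torch.manual_seed(0)
model = Net()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
buffer = None
for t in range(2):
    x = torch.randn(256, 32) + t         # mean shift simulates a new domain
    y = torch.randint(0, 10, (256,))
    train_task(model, (x, y), buffer, opt)
    with torch.no_grad():                # store a few samples + features
        _, z = model(x[:32])
    buffer = (x[:32], y[:32], z)
```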


Bibliographic Details
Main Authors: Jeeveswaran, Kishaan; Arani, Elahe; Zonooz, Bahram
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
Online Access: Order full text
DOI: 10.48550/arXiv.2406.16231
Source: arXiv.org
URL: https://arxiv.org/abs/2406.16231