An Effective Dynamic Gradient Calibration Method for Continual Learning
Saved in:
Main authors: | Lin, Weichen; Chen, Jiaxiang; Huang, Ruomin; Ding, Hu |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Learning |
Online access: | Order full text |
creator | Lin, Weichen; Chen, Jiaxiang; Huang, Ruomin; Ding, Hu |
description | Continual learning (CL) is a fundamental topic in machine learning, where the goal is to train a model on continuously arriving data and tasks. Because of memory limits, we cannot store all historical data, and we therefore face the "catastrophic forgetting" problem: performance on earlier tasks can drop substantially because their information is missing in later training stages. Although a number of elegant methods have been proposed, catastrophic forgetting still cannot be well avoided in practice. In this paper, we study the problem from the gradient perspective: our aim is to develop an effective algorithm that calibrates the gradient at each update step of the model, i.e., to guide the model to update in the right direction even when a large amount of historical data is unavailable. Our idea is partly inspired by the seminal stochastic variance reduction methods (e.g., SVRG and SAGA) that reduce the variance of gradient estimates in stochastic gradient descent. A further benefit is that our approach can serve as a general tool that can be combined with several existing popular CL methods to achieve better performance. We also conduct a set of experiments on several benchmark datasets to evaluate its performance in practice. |
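The abstract only names the variance-reduction idea at a high level. As a rough illustration of what an SVRG-style calibrated update could look like in a continual-learning setting, a minimal sketch follows. This is a hypothetical example, not the paper's actual algorithm: the anchor snapshot, the small memory buffer used to approximate the anchor's full gradient, and all names (`svrg_calibrated_step`, `anchor_full_grad`, the toy linear model, the learning rate) are assumptions made for illustration only.

```python
# Hypothetical sketch (not the authors' exact method): an SVRG-style calibrated
# gradient step. The current batch gradient is corrected by subtracting the
# gradient of a frozen "anchor" snapshot on the same batch and adding the
# anchor's (approximate) full gradient over historical data.
import copy

import torch


def svrg_calibrated_step(model, anchor, anchor_full_grad, batch, loss_fn, lr=0.1):
    x, y = batch

    # Stochastic gradient of the current model on the batch.
    cur_grads = torch.autograd.grad(loss_fn(model(x), y), list(model.parameters()))

    # Stochastic gradient of the frozen anchor on the same batch.
    anc_grads = torch.autograd.grad(loss_fn(anchor(x), y), list(anchor.parameters()))

    # Calibrated update: g(theta) - g(theta_anchor) + mu.
    with torch.no_grad():
        for p, g, g_a, mu in zip(model.parameters(), cur_grads, anc_grads, anchor_full_grad):
            p -= lr * (g - g_a + mu)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(4, 2)
    anchor = copy.deepcopy(model)  # snapshot of the model, e.g. at a task boundary
    loss_fn = torch.nn.CrossEntropyLoss()

    # mu: the anchor's "full" gradient, approximated here on a small memory
    # buffer standing in for the unavailable historical data.
    mem_x, mem_y = torch.randn(32, 4), torch.randint(0, 2, (32,))
    anchor_full_grad = torch.autograd.grad(
        loss_fn(anchor(mem_x), mem_y), list(anchor.parameters())
    )

    batch = (torch.randn(8, 4), torch.randint(0, 2, (8,)))
    svrg_calibrated_step(model, anchor, anchor_full_grad, batch, loss_fn)
    print("calibrated step applied; first weight row:", model.weight[0].detach())
```

The calibration lives in the correction term g(theta) - g(theta_anchor) + mu: the noisy batch gradient is shifted toward the anchor's full gradient, which is the same mechanism SVRG uses to reduce the variance of stochastic gradient estimates.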
doi_str_mv | 10.48550/arxiv.2407.20956 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2407.20956 |
language | eng |
recordid | cdi_arxiv_primary_2407_20956 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning |
title | An Effective Dynamic Gradient Calibration Method for Continual Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T01%3A43%3A36IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=An%20Effective%20Dynamic%20Gradient%20Calibration%20Method%20for%20Continual%20Learning&rft.au=Lin,%20Weichen&rft.date=2024-07-30&rft_id=info:doi/10.48550/arxiv.2407.20956&rft_dat=%3Carxiv_GOX%3E2407_20956%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |