Linked Adapters: Linking Past and Future to Present for Effective Continual Learning
Continual learning allows the system to learn and adapt to new tasks while retaining the knowledge acquired from previous tasks. However, deep learning models suffer from catastrophic forgetting of knowledge learned from earlier tasks while learning a new task. Moreover, retraining large models like...
Published in: | arXiv.org, 2024-12 |
---|---|
Main authors: | Dupati, Srikar Chandra; Srijith, P K; Rezazadegan, Dana; McCarthy, Chris |
Format: | Article |
Language: | eng |
Subjects: | Adapters; Attention; Cognitive tasks; Deep learning; Image classification; Knowledge; Knowledge management; Multilayer perceptrons; Multilayers |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Dupati, Srikar Chandra; Srijith, P K; Rezazadegan, Dana; McCarthy, Chris |
description | Continual learning allows the system to learn and adapt to new tasks while retaining the knowledge acquired from previous tasks. However, deep learning models suffer from catastrophic forgetting of knowledge learned from earlier tasks while learning a new task. Moreover, retraining large models like transformers from scratch for every new task is costly. An effective approach to continual learning is to use a large pre-trained model with task-specific adapters to adapt to the new tasks. Though this approach can mitigate catastrophic forgetting, it fails to transfer knowledge across tasks because each task learns its adapters separately. To address this, we propose a novel approach, Linked Adapters, which allows knowledge transfer to other task-specific adapters through a weighted attention mechanism. Linked Adapters use a multi-layer perceptron (MLP) to model the attention weights, which overcomes the challenge of backward knowledge transfer in continual learning in addition to modeling the forward knowledge transfer. During inference, our proposed approach effectively leverages knowledge transfer through MLP-based attention weights across all the lateral task adapters. Through numerous experiments conducted on diverse image classification datasets, we demonstrate improved performance on continual learning tasks using Linked Adapters. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3145904207 |
source | Free E-Journals |
subjects | Adapters; Attention; Cognitive tasks; Deep learning; Image classification; Knowledge; Knowledge management; Multilayer perceptrons; Multilayers |
title | Linked Adapters: Linking Past and Future to Present for Effective Continual Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T00%3A09%3A29IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Linked%20Adapters:%20Linking%20Past%20and%20Future%20to%20Present%20for%20Effective%20Continual%20Learning&rft.jtitle=arXiv.org&rft.au=Dupati,%20Srikar%20Chandra&rft.date=2024-12-14&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3145904207%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3145904207&rft_id=info:pmid/&rfr_iscdi=true |
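The description above outlines the core mechanism of the paper: task-specific adapters on top of a frozen pre-trained backbone, with an MLP producing attention weights that mix the outputs of other tasks' adapters into the current task's representation. The snippet below is only a minimal PyTorch sketch of that idea, not the authors' implementation; the class and parameter names (`Adapter`, `LinkedAdapterLayer`, `attn_mlp`, the bottleneck size, and how the attention weights are conditioned and combined with the current task's adapter) are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the authors' code): a frozen layer with per-task
# adapters, where an MLP yields attention weights over all task adapters.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Small bottleneck adapter with a residual connection (illustrative)."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))


class LinkedAdapterLayer(nn.Module):
    """One frozen backbone layer plus task adapters linked by MLP attention."""

    def __init__(self, dim: int, num_tasks: int):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)  # stand-in for a frozen pre-trained layer
        self.backbone.requires_grad_(False)
        self.adapters = nn.ModuleList([Adapter(dim) for _ in range(num_tasks)])
        # MLP that maps backbone features to one attention weight per task adapter.
        self.attn_mlp = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.ReLU(), nn.Linear(dim // 2, num_tasks)
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        h = self.backbone(x)
        outs = torch.stack([a(h) for a in self.adapters], dim=1)   # (batch, tasks, dim)
        weights = torch.softmax(self.attn_mlp(h), dim=-1)          # (batch, tasks)
        mixed = (weights.unsqueeze(-1) * outs).sum(dim=1)          # attention-weighted mix
        # How the current task's own adapter output is combined with the mixed
        # representation is a modelling choice; here we simply average the two.
        return 0.5 * (mixed + outs[:, task_id])


if __name__ == "__main__":
    layer = LinkedAdapterLayer(dim=128, num_tasks=3)
    features = layer(torch.randn(4, 128), task_id=1)
    print(features.shape)  # torch.Size([4, 128])
```

Because the mixing weights come from a learned MLP rather than fixed coefficients, a sketch like this can, in principle, route knowledge both forward and backward across task adapters, which is the property the abstract highlights; the exact conditioning and training procedure used in the paper may differ.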