Active Continual Learning: On Balancing Knowledge Retention and Learnability

Acquiring new knowledge without forgetting what has been learned in a sequence of tasks is the central focus of continual learning (CL). While tasks arrive sequentially, the training data are often prepared and annotated independently, leading to the CL of incoming supervised learning tasks. This paper considers the under-explored problem of active continual learning (ACL) for a sequence of active learning (AL) tasks, where each incoming task includes a pool of unlabelled data and an annotation budget. We investigate the effectiveness and interplay between several AL and CL algorithms in the domain, class and task-incremental scenarios. Our experiments reveal the trade-off between two contrasting goals of not forgetting the old knowledge and the ability to quickly learn new knowledge in CL and AL, respectively. While conditioning the AL query strategy on the annotations collected for the previous tasks leads to improved task performance on the domain and task incremental learning, our proposed forgetting-learning profile suggests a gap in balancing the effect of AL and CL for the class-incremental scenario.
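The abstract describes pool-based active learning applied to a sequence of tasks: each task brings an unlabelled pool and a budget, and the model is updated continually rather than retrained from scratch. A minimal sketch of that loop, with uncertainty sampling standing in for the paper's (unspecified) query strategies — all function names here are hypothetical, not from the paper:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(pool, predict, budget):
    """Pool-based uncertainty sampling: pick the `budget` most
    uncertain unlabelled examples under the current model."""
    ranked = sorted(pool, key=lambda x: entropy(predict(x)), reverse=True)
    return ranked[:budget]

def active_continual_learning(task_pools, predict, train, budget):
    """Hypothetical ACL loop: tasks arrive one after another, each
    with its own unlabelled pool. The model is updated (not reset)
    across tasks, so annotations collected for earlier tasks can
    condition the queries made for later ones."""
    labelled = []
    for pool in task_pools:
        batch = select_batch(pool, predict, budget)  # AL query step
        labelled.extend(batch)   # annotation step (oracle omitted here)
        train(labelled)          # continual update on all data so far
    return labelled
```

The trade-off the paper studies lives in the last two lines: a sharper query strategy speeds up learning the new task, while the continual-update rule decides how much old knowledge survives.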

Detailed Description

Bibliographic Details
Main Authors: Vu, Thuy-Trang, Khadivi, Shahram, Ghorbanali, Mahsa, Phung, Dinh, Haffari, Gholamreza
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Vu, Thuy-Trang
Khadivi, Shahram
Ghorbanali, Mahsa
Phung, Dinh
Haffari, Gholamreza
description Acquiring new knowledge without forgetting what has been learned in a sequence of tasks is the central focus of continual learning (CL). While tasks arrive sequentially, the training data are often prepared and annotated independently, leading to the CL of incoming supervised learning tasks. This paper considers the under-explored problem of active continual learning (ACL) for a sequence of active learning (AL) tasks, where each incoming task includes a pool of unlabelled data and an annotation budget. We investigate the effectiveness and interplay between several AL and CL algorithms in the domain, class and task-incremental scenarios. Our experiments reveal the trade-off between two contrasting goals of not forgetting the old knowledge and the ability to quickly learn new knowledge in CL and AL, respectively. While conditioning the AL query strategy on the annotations collected for the previous tasks leads to improved task performance on the domain and task incremental learning, our proposed forgetting-learning profile suggests a gap in balancing the effect of AL and CL for the class-incremental scenario.
doi_str_mv 10.48550/arxiv.2305.03923
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2305.03923
language eng
recordid cdi_arxiv_primary_2305_03923
source arXiv.org
subjects Computer Science - Computation and Language
Computer Science - Learning
title Active Continual Learning: On Balancing Knowledge Retention and Learnability
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T21%3A14%3A38IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Active%20Continual%20Learning:%20On%20Balancing%20Knowledge%20Retention%20and%20Learnability&rft.au=Vu,%20Thuy-Trang&rft.date=2023-05-06&rft_id=info:doi/10.48550/arxiv.2305.03923&rft_dat=%3Carxiv_GOX%3E2305_03923%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true