Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning

Adversarial attacks on deep learning models are a serious security issue in real-world deployments, yet they have rarely been discussed in the widely used class-incremental continual learning (CICL) setting. In this paper, we address the problems that arise when adversarial training, a well-known defense against adversarial attacks, is applied to CICL. A well-known problem of CICL is class imbalance: the model is biased toward the current task because only a few samples of previous tasks are retained. Combined with adversarial training, this imbalance causes a second imbalance in the number of attack trials across tasks. With clean data of the minority classes scarce due to the class imbalance, and attack trials from the majority classes inflated by the secondary imbalance, adversarial training distorts the optimal decision boundaries. The distortion eventually lowers both accuracy and robustness below what adversarial training alone achieves. To exclude these effects, we propose a straightforward but significantly effective method, External Adversarial Training (EAT), which can be applied to any method that uses experience replay. At each time step, EAT adversarially trains an auxiliary external model on the current task data only, and uses the adversarial examples generated against this model to train the target model. We verify the effects on a toy problem and show their significance on CICL image classification benchmarks. We expect the results to serve as a first baseline for robustness research on CICL.
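The abstract describes EAT only at a high level. Below is a minimal PyTorch-style sketch of how an external adversarial training step could sit on top of an experience-replay learner; the PGD settings, the hypothetical `replay_buffer.sample`/`replay_buffer.update` interface, and all hyperparameters are assumptions made for illustration, not the authors' released implementation.

```python
# Minimal sketch of External Adversarial Training (EAT) on top of experience replay.
# All interfaces (e.g. replay_buffer.sample / replay_buffer.update) and hyperparameters
# below are illustrative assumptions, not the authors' code.
import copy

import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard PGD: iterative gradient-sign steps projected back into the eps-ball."""
    x_adv = (x.detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def eat_task_step(target_model, current_loader, replay_buffer, optimizer, epochs=1):
    """One CICL task step with EAT (sketch)."""
    # 1) Adversarially train a fresh external copy on the *current task only*,
    #    so attack trials never involve the scarce replayed samples of old classes.
    external = copy.deepcopy(target_model)
    ext_opt = torch.optim.SGD(external.parameters(), lr=0.01)
    for _ in range(epochs):
        for x, y in current_loader:
            x_adv = pgd_attack(external, x, y)
            ext_opt.zero_grad()
            F.cross_entropy(external(x_adv), y).backward()
            ext_opt.step()

    # 2) Train the target model as usual with replay, but feed it the adversarial
    #    examples generated against the external model for the current-task batches.
    for _ in range(epochs):
        for x, y in current_loader:
            x_adv = pgd_attack(external, x, y)
            x_mem, y_mem = replay_buffer.sample(len(y))   # clean exemplars of past tasks
            optimizer.zero_grad()
            loss = F.cross_entropy(target_model(torch.cat([x_adv, x_mem])),
                                   torch.cat([y, y_mem]))
            loss.backward()
            optimizer.step()

    replay_buffer.update(current_loader)  # store exemplars for future tasks
```

The design choice this sketch tries to capture is that every attack trial is generated against a model that never sees the replayed exemplars, so the class imbalance of the memory buffer cannot skew which classes get attacked.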


Bibliographic Details
Main Authors: Kwon, Minchan; Kim, Kangil
Format: Article
Language: English
Subjects: Computer Science - Learning
creator Kwon, Minchan
Kim, Kangil
doi_str_mv 10.48550/arxiv.2305.13678
format Article
identifier DOI: 10.48550/arxiv.2305.13678
language eng
recordid cdi_arxiv_primary_2305_13678
source arXiv.org
subjects Computer Science - Learning
title Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning
url https://arxiv.org/abs/2305.13678