A comprehensive, application-oriented study of catastrophic forgetting in DNNs

We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or: incremental) learning. A new experimental protocol is proposed that enforces typical constraints encountered in application scenarios. As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment to previous works on CF. Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions. We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models.
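The sketch below is not the paper's code; it is a minimal, self-contained illustration of the kind of SLT evaluation the abstract describes: train a model on sub-task 1, then on sub-task 2 without revisiting sub-task 1, and compare sub-task-1 accuracy before and after. The synthetic Gaussian data and the simple linear model are assumptions made purely for illustration; the paper itself uses real visual classification datasets and DNN models.

```python
# Minimal sketch (not the paper's code) of the standard CF measurement
# protocol: split a classification problem into two sub-tasks (an SLT),
# train sequentially, and watch sub-task-1 accuracy collapse afterwards.
# Synthetic Gaussian blobs stand in for the paper's visual datasets.
import numpy as np

rng = np.random.default_rng(0)

def make_task(centers, n=500):
    """Two-class Gaussian blobs; returns features and 0/1 labels."""
    X = np.vstack([rng.normal(c, 0.5, size=(n, 2)) for c in centers])
    y = np.repeat([0, 1], n)
    return X, y

def train(w, b, X, y, lr=0.1, epochs=200):
    """Plain logistic regression via gradient descent (stands in for a DNN)."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid
        g = p - y                            # gradient of BCE w.r.t. logits
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0).astype(int) == y).mean()

# The two sub-tasks require conflicting decision boundaries, which induces
# the same interference that disjoint class splits cause in real SLTs.
X1, y1 = make_task([(-2, -2), (-2, 2)])   # sub-task 1: class 1 above x2 = 0
X2, y2 = make_task([(2, 2), (2, -2)])     # sub-task 2: class 1 below x2 = 0

w, b = np.zeros(2), 0.0
w, b = train(w, b, X1, y1)
print("task 1 acc after task 1:", accuracy(w, b, X1, y1))

w, b = train(w, b, X2, y2)                # no replay, no regularization
print("task 1 acc after task 2:", accuracy(w, b, X1, y1))  # drops sharply: CF
```

Regularization approaches such as EWC try to prevent exactly this drop by anchoring weights that were important for earlier sub-tasks; the paper evaluates whether such mechanisms hold up under its application-oriented constraints.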

Bibliographic details
Published in: arXiv.org, 2019-05
Main authors: Pfülb, B; Gepperth, A
Format: Article
Language: English
Subjects: Datasets; Learning; Neural networks
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Source: Free E-Journals
Online access: Full text