Representation Compensation Networks for Continual Semantic Segmentation



Bibliographic details
Main authors: Zhang, Chang-Bin; Xiao, Jia-Wen; Liu, Xialei; Chen, Ying-Cong; Cheng, Ming-Ming
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: Order full text
creator Zhang, Chang-Bin
Xiao, Jia-Wen
Liu, Xialei
Chen, Ying-Cong
Cheng, Ming-Ming
description In this work, we study the continual semantic segmentation problem, in which a deep neural network must incorporate new classes continually without catastrophic forgetting. We propose a structural re-parameterization mechanism, named the representation compensation (RC) module, to decouple the representation learning of old and new knowledge. The RC module consists of two dynamically evolved branches, one frozen and one trainable. In addition, we design a pooled cube knowledge distillation strategy over both the spatial and channel dimensions to further enhance the plasticity and stability of the model. We conduct experiments on two challenging continual semantic segmentation scenarios: continual class segmentation and continual domain segmentation. Without any extra computational overhead or parameters during inference, our method outperforms the state of the art. The code is available at \url{https://github.com/zhangchbin/RCIL}.
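The claim of "no extra overhead at inference" rests on the structural re-parameterization property of the RC module: because convolution is linear, two parallel branches whose outputs are summed can be collapsed into a single convolution with the summed kernels. The following is a minimal, hypothetical 1-D sketch of that property (not the authors' code; the actual module operates on 2-D feature maps with additional components):

```python
def conv1d(x, kernel):
    """Valid 1-D convolution (no padding, stride 1), pure Python."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def rc_forward_train(x, k_frozen, k_train):
    """Training-time forward pass: sum of the frozen and trainable branches."""
    frozen_out = conv1d(x, k_frozen)
    train_out = conv1d(x, k_train)
    return [a + b for a, b in zip(frozen_out, train_out)]

def reparameterize(k_frozen, k_train):
    """Merge both branches into one equivalent kernel (linearity of conv)."""
    return [a + b for a, b in zip(k_frozen, k_train)]

# Illustrative weights: one branch inherited from the old model and kept
# frozen, the other updated while learning the new classes.
x = [1.0, 2.0, -1.0, 0.5, 3.0]
k_frozen = [0.2, -0.1, 0.4]
k_train = [0.05, 0.3, -0.2]

merged = reparameterize(k_frozen, k_train)
two_branch = rc_forward_train(x, k_frozen, k_train)
single_branch = conv1d(x, merged)

# The merged single convolution reproduces the two-branch output, so
# inference needs no extra parameters or compute.
assert all(abs(a - b) < 1e-9 for a, b in zip(single_branch, two_branch))
```

The same argument applies channel-wise to 2-D convolutions, which is why the trained two-branch module can be deployed as an ordinary single-branch network.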
doi_str_mv 10.48550/arxiv.2203.05402
format Article
creationdate 2022-03-10
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
oa free_for_read
identifier DOI: 10.48550/arxiv.2203.05402
language eng
recordid cdi_arxiv_primary_2203_05402
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Representation Compensation Networks for Continual Semantic Segmentation
url https://arxiv.org/abs/2203.05402