CIT: Rethinking Class-incremental Semantic Segmentation with a Class Independent Transformation

Bibliographic Details
Published in: arXiv.org, 2024-11
Main authors: Ge, Jinchao; Zhang, Bowen; Liu, Akide; Phan, Minh Hieu; Chen, Qi; Shu, Yangyang; Zhao, Yang
Format: Article
Language: English
Subjects: Cognitive tasks; Configurations; Datasets; Learning; Segments; Semantic segmentation; Semantics
Online access: Full text
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)

Abstract

Class-incremental semantic segmentation (CSS) requires a model to learn to segment new classes without forgetting how to segment previous ones; this is typically achieved by distilling current knowledge while incorporating the latest data. However, existing class-specific CSS methods do not support bypassing iterative distillation by transferring the outputs for initial classes directly to the current learning task: because they rely on softmax, they enforce dependency between classes and adjust the output distribution at each learning step, producing a large probability-distribution gap between initial and current tasks. We introduce a simple yet effective Class Independent Transformation (CIT) that converts the outputs of existing semantic segmentation models into class-independent forms with negligible cost or performance loss. Using the class-independent predictions that CIT provides, we build an accumulative distillation framework that incorporates information from all classes equitably. Extensive experiments on various segmentation architectures, including DeepLabV3, Mask2Former, and SegViTv2, show minimal task forgetting across datasets: less than 5% on ADE20K in the most challenging 11-task configuration, and less than 1% across all configurations on PASCAL VOC 2012.
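
To make the core idea concrete, here is a minimal sketch (in PyTorch) of what a class-independent output enables, based only on the abstract above. The sigmoid transformation, the function names, and the binary cross-entropy distillation loss are illustrative assumptions rather than the paper's exact CIT formulation; the point is that independent per-class probabilities, unlike softmax, do not shift when channels for new classes are appended, so old-class outputs can be distilled directly across incremental steps.

```python
import torch
import torch.nn.functional as F

def to_class_independent(logits: torch.Tensor) -> torch.Tensor:
    """Map per-pixel logits of shape (B, C, H, W) to per-class
    probabilities with an elementwise sigmoid. Unlike softmax, each
    class channel is self-contained: appending channels for new classes
    at a later incremental step leaves the old channels unchanged.
    (Illustrative assumption; the paper's exact CIT mapping may differ.)"""
    return torch.sigmoid(logits)

def old_class_distillation_loss(student_logits: torch.Tensor,
                                teacher_probs: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy between the student's class-independent
    predictions for the old classes and the teacher's stored predictions,
    so every old class contributes on equal footing regardless of the
    step in which it was learned. (Hypothetical helper for illustration.)"""
    c_old = teacher_probs.shape[1]
    student_probs = to_class_independent(student_logits[:, :c_old])
    return F.binary_cross_entropy(student_probs, teacher_probs)

# Usage sketch: a teacher trained on 10 classes, a student extended to 15.
teacher_logits = torch.randn(2, 10, 64, 64)  # frozen model from step t-1
student_logits = torch.randn(2, 15, 64, 64)  # current model, 5 new classes
loss = old_class_distillation_loss(student_logits,
                                   to_class_independent(teacher_logits))
```

Under softmax, by contrast, the old classes' probabilities change as soon as the channel count grows, which is exactly the gap between initial and current output distributions that the abstract describes.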