Exploiting Shape Cues for Weakly Supervised Semantic Segmentation
Saved in:
Published in: | arXiv.org 2022-08 |
---|---|
Main authors: | Kho, Sungpil; Lee, Pilhyeon; Lee, Wonyoung; Minsong Ki; Byun, Hyeran |
Format: | Article |
Language: | English |
Online access: | Full text |
creator | Kho, Sungpil; Lee, Pilhyeon; Lee, Wonyoung; Minsong Ki; Byun, Hyeran |
description | Weakly supervised semantic segmentation (WSSS) aims to produce pixel-wise class predictions with only image-level labels for training. To this end, previous methods adopt the common pipeline: they generate pseudo masks from class activation maps (CAMs) and use such masks to supervise segmentation networks. However, it is challenging to derive comprehensive pseudo masks that cover the whole extent of objects due to the local property of CAMs, i.e., they tend to focus solely on small discriminative object parts. In this paper, we associate the locality of CAMs with the texture-biased property of convolutional neural networks (CNNs). Accordingly, we propose to exploit shape information to supplement the texture-biased CNN features, thereby encouraging mask predictions to be not only comprehensive but also well-aligned with object boundaries. We further refine the predictions in an online fashion with a novel refinement method that takes into account both the class and the color affinities, in order to generate reliable pseudo masks to supervise the model. Importantly, our model is end-to-end trained within a single-stage framework and therefore efficient in terms of the training cost. Through extensive experiments on PASCAL VOC 2012, we validate the effectiveness of our method in producing precise and shape-aligned segmentation results. Specifically, our model surpasses the existing state-of-the-art single-stage approaches by large margins. What is more, it also achieves a new state-of-the-art performance over multi-stage approaches, when adopted in a simple two-stage pipeline without bells and whistles. |
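The CAM-to-pseudo-mask step that the abstract builds on can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the ReLU-then-normalize convention, and the binarization threshold are all assumptions; a CAM is simply the sum of a class's classifier weights times the network's final feature maps.

```python
def class_activation_map(features, weights):
    """Weighted sum of feature maps by one class's classifier weights (a CAM).

    features: list of C feature maps, each an H x W list of lists.
    weights: list of C classifier weights for the chosen class.
    Returns an H x W map, clipped to non-negative values and scaled to [0, 1].
    """
    h, w = len(features[0]), len(features[0][0])
    cam = [[sum(wt * fm[i][j] for wt, fm in zip(weights, features))
            for j in range(w)] for i in range(h)]
    cam = [[max(v, 0.0) for v in row] for row in cam]        # keep positive evidence
    peak = max(max(row) for row in cam)
    if peak > 0:
        cam = [[v / peak for v in row] for row in cam]       # normalize to [0, 1]
    return cam

def pseudo_mask(cam, threshold=0.3):
    """Binarize a CAM into a foreground pseudo mask for supervising segmentation."""
    return [[1 if v >= threshold else 0 for v in row] for row in cam]

# Toy example: two 2x2 feature maps and one class's weights.
feats = [[[1.0, 0.0], [0.0, 2.0]],
         [[0.0, 1.0], [1.0, 0.0]]]
w = [0.5, 1.0]
cam = class_activation_map(feats, w)
mask = pseudo_mask(cam, threshold=0.6)
```

The locality problem the abstract describes shows up here directly: thresholding keeps only the highest-activation (most discriminative) pixels, so the resulting mask typically undercovers the object — which is what the paper's shape cues and class/color-affinity refinement are meant to correct.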
doi_str_mv | 10.48550/arxiv.2208.04286 |
format | Article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-08 |
issn | 2331-8422 |
language | eng |
source | arXiv.org; Free E-Journals |
subjects | Artificial neural networks; Bells; Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Image segmentation; Masks; Semantic segmentation; Semantics; Texture; Training |
title | Exploiting Shape Cues for Weakly Supervised Semantic Segmentation |