SFNet: Faster and Accurate Semantic Segmentation via Semantic Flow

Published in: International Journal of Computer Vision, 2024-02, Vol. 132 (2), pp. 466-489
Authors: Li, Xiangtai; Zhang, Jiangning; Yang, Yibo; Cheng, Guangliang; Yang, Kuiyuan; Tong, Yunhai; Tao, Dacheng
Format: Article
Language: English
Online access: Full text
Abstract:

In this paper, we focus on effective methods for fast and accurate semantic segmentation. A common way to improve performance is to obtain high-resolution feature maps with strong semantic representation. Two strategies are widely used, atrous convolutions and feature pyramid fusion, but both are either computationally intensive or ineffective. Inspired by optical flow for motion alignment between adjacent video frames, we propose a Flow Alignment Module (FAM) that learns the semantic flow between feature maps of adjacent levels and broadcasts high-level features to high-resolution features effectively and efficiently. Furthermore, integrating our FAM into a standard feature pyramid structure yields superior performance over other real-time methods, even with lightweight backbone networks such as ResNet-18 and DFNet. To further speed up inference, we also present a novel Gated Dual Flow Alignment Module that directly aligns high-resolution and low-resolution feature maps; we term the improved network SFNet-Lite. Extensive experiments on several challenging datasets show the effectiveness of both SFNet and SFNet-Lite. In particular, on the Cityscapes test set, the SFNet-Lite series achieves 80.1 mIoU at 60 FPS with a ResNet-18 backbone and 78.8 mIoU at 120 FPS with an STDC backbone on an RTX 3090. Moreover, we unify four challenging driving datasets (Cityscapes, Mapillary, IDD, and BDD) into one large dataset, which we name the Unified Driving Segmentation (UDS) dataset; it contains diverse domain and style information. We benchmark several representative works on UDS. Both SFNet and SFNet-Lite achieve the best speed-accuracy trade-off on UDS, serving as strong baselines in this challenging setting. The code and models are publicly available at https://github.com/lxtGH/SFSegNets .
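To make the flow-alignment mechanism the abstract describes concrete, below is a minimal PyTorch sketch: predict a two-channel semantic-flow field from a pair of adjacent pyramid features, warp the upsampled coarse features along that flow, and fuse the result with the fine features. The class name, layer widths, the 3x3 flow-prediction convolution, and the grid_sample-based warp are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
# Illustrative sketch of a Flow Alignment Module (FAM); names and layer
# choices are assumptions for exposition, not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FlowAlignmentModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Project both levels to a shared width before predicting flow.
        self.proj_high = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.proj_low = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        # Predict a 2-channel flow field (dx, dy in pixels) per position.
        self.flow_make = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1, bias=False)

    def forward(self, high_res: torch.Tensor, low_res: torch.Tensor) -> torch.Tensor:
        # high_res: fine features (B, C, H, W) from the shallower level;
        # low_res: coarse, semantically stronger features (B, C, H/2, W/2).
        size = high_res.shape[-2:]
        low_up = F.interpolate(self.proj_low(low_res), size=size,
                               mode="bilinear", align_corners=True)
        flow = self.flow_make(torch.cat([self.proj_high(high_res), low_up], dim=1))
        # Warp the upsampled coarse features along the learned flow, then fuse.
        return high_res + self._warp(low_up, flow)

    @staticmethod
    def _warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        b, _, h, w = feat.shape
        # Identity sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=feat.device),
            torch.linspace(-1.0, 1.0, w, device=feat.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Convert the pixel-unit flow to normalized-coordinate offsets.
        scale = torch.tensor([(w - 1) / 2.0, (h - 1) / 2.0], device=feat.device)
        grid = grid + flow.permute(0, 2, 3, 1) / scale
        return F.grid_sample(feat, grid, mode="bilinear", align_corners=True)
```

A quick smoke test under these assumptions: FlowAlignmentModule(64)(torch.randn(1, 64, 64, 128), torch.randn(1, 64, 32, 64)) returns a (1, 64, 64, 128) tensor. The point of the learned warp is that coarse features are resampled to where the flow says they belong, rather than naively bilinearly upsampled; the gated dual variant in SFNet-Lite applies the same idea directly between the highest- and lowest-resolution maps, avoiding a cascade of per-level alignments.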
DOI: 10.1007/s11263-023-01875-x
ISSN: 0920-5691
EISSN: 1573-1405
Source: Springer Nature - Complete Springer Journals
Subjects: Alignment; Artificial Intelligence; Computer Imaging; Computer networks; Computer Science; Datasets; Feature maps; Flow mapping; High resolution; Image Processing and Computer Vision; Modules; Optical flow (image analysis); Pattern Recognition; Pattern Recognition and Graphics; Performance enhancement; Pyramids; Semantic segmentation; Semantics; Vision