FuseSeg: Semantic Segmentation of Urban Scenes Based on RGB and Thermal Data Fusion
Semantic segmentation of urban scenes is an essential component in various applications of autonomous driving, and it has made great progress with the rise of deep learning technologies. Most current semantic segmentation networks use single-modal sensory data, usually the RGB images produced by visible-light cameras. However, the segmentation performance of these networks is prone to degrade when lighting conditions are unsatisfactory, such as dim light or darkness. We find that the thermal images produced by thermal imaging cameras are robust to such challenging lighting conditions. Therefore, in this article, we propose FuseSeg, a novel RGB and thermal data fusion network, to achieve superior semantic segmentation performance in urban scenes. The experimental results demonstrate that our network outperforms the state-of-the-art networks.

Note to Practitioners: This article investigates the problem of semantic segmentation of urban scenes when lighting conditions are unsatisfactory. We provide a solution to this problem via information fusion with RGB and thermal data. We build an end-to-end deep neural network, which takes as input a pair of RGB and thermal images and outputs pixel-wise semantic labels. Our network could be used for urban scene understanding, which serves as a fundamental component of many autonomous driving tasks, such as environment modeling, obstacle avoidance, motion prediction, and planning. Moreover, the simple design of our network allows it to be easily implemented using various deep learning frameworks, which facilitates its application on different hardware and software platforms.
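The abstract describes an end-to-end network that takes a pair of RGB and thermal images and outputs pixel-wise semantic labels. As a rough illustration of that kind of two-modality fusion design, the PyTorch sketch below builds one small encoder per modality, fuses their features by element-wise addition at each stage, and decodes the fused features into per-pixel class logits. The layer sizes, the additive fusion scheme, the class count of 9, and the names `RGBThermalFusionSeg` and `conv_block` are assumptions made for this example only; they are not the actual FuseSeg architecture, which is specified in the full article.

```python
# Illustrative sketch only: a minimal two-stream RGB-thermal fusion segmentation
# network. All layer sizes and the fusion scheme are assumptions, NOT FuseSeg itself.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions; the second one downsamples by a factor of 2."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class RGBThermalFusionSeg(nn.Module):
    """Two encoders (RGB, thermal) whose features are fused by element-wise
    addition at each stage, followed by a decoder that restores full resolution."""

    def __init__(self, num_classes=9):  # 9 classes is an assumption, not from the paper
        super().__init__()
        chs = [32, 64, 128]
        self.rgb_enc = nn.ModuleList(
            [conv_block(3, chs[0]), conv_block(chs[0], chs[1]), conv_block(chs[1], chs[2])])
        self.thermal_enc = nn.ModuleList(
            [conv_block(1, chs[0]), conv_block(chs[0], chs[1]), conv_block(chs[1], chs[2])])
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(chs[2], chs[1], 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(chs[1], chs[0], 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(chs[0], num_classes, 2, stride=2),
        )

    def forward(self, rgb, thermal):
        x_rgb, x_th = rgb, thermal
        for rgb_layer, th_layer in zip(self.rgb_enc, self.thermal_enc):
            x_rgb = rgb_layer(x_rgb)
            x_th = th_layer(x_th)
            x_rgb = x_rgb + x_th  # fuse thermal features into the RGB stream
        return self.decoder(x_rgb)  # pixel-wise class logits at input resolution


if __name__ == "__main__":
    net = RGBThermalFusionSeg(num_classes=9)
    rgb = torch.randn(1, 3, 480, 640)       # RGB image batch
    thermal = torch.randn(1, 1, 480, 640)   # single-channel thermal image batch
    print(net(rgb, thermal).shape)          # torch.Size([1, 9, 480, 640])
```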
Saved in:
Published in: | IEEE transactions on automation science and engineering, 2021-07, Vol.18 (3), p.1000-1011 |
---|---|
Main Authors: | Sun, Yuxiang; Zuo, Weixun; Yun, Peng; Wang, Hengli; Liu, Ming |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
container_end_page | 1011 |
---|---|
container_issue | 3 |
container_start_page | 1000 |
container_title | IEEE transactions on automation science and engineering |
container_volume | 18 |
creator | Sun, Yuxiang; Zuo, Weixun; Yun, Peng; Wang, Hengli; Liu, Ming |
doi_str_mv | 10.1109/TASE.2020.2993143 |
format | Article |
identifier | ISSN: 1545-5955 |
ispartof | IEEE transactions on automation science and engineering, 2021-07, Vol.18 (3), p.1000-1011 |
issn | 1545-5955; 1558-3783 |
language | eng |
recordid | cdi_proquest_journals_2547646624 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial neural networks; Autonomous driving; Autonomous vehicles; Cameras; Color imagery; Darkness; Data integration; Deep learning; Environment models; Heat detection; Image segmentation; information fusion; Lighting; Machine learning; Obstacle avoidance; Scene analysis; Semantic segmentation; Semantics; thermal images; Thermal imaging; Urban areas; urban scenes |
title | FuseSeg: Semantic Segmentation of Urban Scenes Based on RGB and Thermal Data Fusion |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T14%3A21%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=FuseSeg:%20Semantic%20Segmentation%20of%20Urban%20Scenes%20Based%20on%20RGB%20and%20Thermal%20Data%20Fusion&rft.jtitle=IEEE%20transactions%20on%20automation%20science%20and%20engineering&rft.au=Sun,%20Yuxiang&rft.date=2021-07-01&rft.volume=18&rft.issue=3&rft.spage=1000&rft.epage=1011&rft.pages=1000-1011&rft.issn=1545-5955&rft.eissn=1558-3783&rft.coden=ITASC7&rft_id=info:doi/10.1109/TASE.2020.2993143&rft_dat=%3Cproquest_RIE%3E2547646624%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2547646624&rft_id=info:pmid/&rft_ieee_id=9108585&rfr_iscdi=true |