LANet: Local Attention Embedding to Improve the Semantic Segmentation of Remote Sensing Images

The trade-off between feature representation power and spatial localization accuracy is crucial for the dense classification/semantic segmentation of remote sensing images (RSIs). High-level features extracted from the late layers of a neural network are rich in semantic information, yet have blurred spatial details; low-level features extracted from the early layers of a network contain more pixel-level information but are isolated and noisy. It is therefore difficult to bridge the gap between high- and low-level features due to their difference in terms of physical information content and spatial distribution. In this article, we contribute to solving this problem by enhancing the feature representation in two ways. On the one hand, a patch attention module (PAM) is proposed to enhance the embedding of context information based on a patchwise calculation of local attention. On the other hand, an attention embedding module (AEM) is proposed to enrich the semantic information of low-level features by embedding local focus from high-level features. Both proposed modules are lightweight and can be applied to process the extracted features of convolutional neural networks (CNNs). Experiments show that, by integrating the proposed modules into a baseline fully convolutional network (FCN), the resulting local attention network (LANet) greatly improves the performance over the baseline and outperforms other attention-based methods on two RSI data sets.
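
The abstract describes two lightweight attention modules: a patch attention module (PAM) that computes local attention patchwise to embed context, and an attention embedding module (AEM) that injects attention derived from high-level features into low-level features. The PyTorch sketch below is only a hypothetical illustration of that idea, assuming non-overlapping average-pooled patches, 1x1-convolution bottlenecks, and sigmoid gating; the class names, the patch_size and reduction parameters, and all other design details are assumptions and are not taken from the paper.

# Hypothetical sketch of PAM-like patchwise local attention and AEM-like
# attention embedding, based only on the abstract; the actual LANet design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchAttention(nn.Module):
    """Re-weights a feature map with channel attention pooled over local patches."""

    def __init__(self, channels: int, patch_size: int = 8, reduction: int = 4):
        super().__init__()
        self.patch_size = patch_size
        # Bottleneck of 1x1 convolutions mapping each patch descriptor to attention weights.
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Average-pool the feature map into a coarse grid of patch descriptors.
        grid = (max(h // self.patch_size, 1), max(w // self.patch_size, 1))
        pooled = F.adaptive_avg_pool2d(x, grid)
        # Per-patch attention, upsampled back to the input resolution.
        attn = F.interpolate(self.fc(pooled), size=(h, w), mode="nearest")
        return x * attn


class AttentionEmbedding(nn.Module):
    """Enriches low-level features with attention derived from high-level features."""

    def __init__(self, high_channels: int, low_channels: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(high_channels, low_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the high-level attention map to the low-level resolution and gate.
        attn = F.interpolate(self.proj(high), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        return low * attn


if __name__ == "__main__":
    high = torch.randn(2, 512, 16, 16)    # late-layer (high-level) features
    low = torch.randn(2, 64, 128, 128)    # early-layer (low-level) features
    print(PatchAttention(512)(high).shape)               # torch.Size([2, 512, 16, 16])
    print(AttentionEmbedding(512, 64)(low, high).shape)  # torch.Size([2, 64, 128, 128])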

Bibliographic details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2021-01, Vol. 59 (1), p. 426-435
Main authors: Ding, Lei; Tang, Hao; Bruzzone, Lorenzo
Format: Article
Language: English
Online access: Order full text
DOI: 10.1109/TGRS.2020.2994150
ISSN: 0196-2892
EISSN: 1558-0644
Record ID: cdi_proquest_journals_2473271537
Source: IEEE Electronic Library (IEL)
Subjects:
Artificial neural networks
Convolutional neural network (CNN)
Convolutional neural networks
Correlation
Decoding
deep learning
Embedding
Feature extraction
Geographical distribution
Image classification
Image processing
Image segmentation
Localization
Modules
Neural networks
Remote sensing
Representations
Semantic segmentation
Semantics
Spatial discrimination
Spatial distribution