Cross-Attention Spectral-Spatial Network for Hyperspectral Image Classification
Hyperspectral image (HSI) classification aims to identify the categories of hyperspectral pixels. Recently, many convolutional neural networks (CNNs) have been designed to exploit the spectral and spatial information of HSI for classification. In recent CNN-based methods, 2-D or 3-D convolutions are inevitably used as the basic operations to extract spatial or spectral-spatial features. However, 2-D and 3-D convolutions are sensitive to image rotation, so recent CNN-based methods may not be robust to HSI rotation. In this article, a cross-attention spectral-spatial network (CASSN) is proposed to alleviate this problem. First, a cross-spectral attention component exploits the local and global spectra of a pixel to generate band weights that suppress redundant bands. Second, a spectral feature extraction component captures spectral features. Then, a cross-spatial attention component generates spectral-spatial features from the HSI patch under the guidance of the pixel to be classified. Finally, the spectral-spatial feature is fed to a softmax classifier to obtain the category. The effectiveness of CASSN is demonstrated on three public databases.
Saved in:
Published in: | IEEE transactions on geoscience and remote sensing 2022, Vol.60, p.1-14 |
---|---|
Main authors: | Yang, Kai; Sun, Hao; Zou, Chunbo; Lu, Xiaoqiang |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
container_end_page | 14 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on geoscience and remote sensing |
container_volume | 60 |
creator | Yang, Kai; Sun, Hao; Zou, Chunbo; Lu, Xiaoqiang |
description | Hyperspectral image (HSI) classification aims to identify the categories of hyperspectral pixels. Recently, many convolutional neural networks (CNNs) have been designed to exploit the spectral and spatial information of HSI for classification. In recent CNN-based methods, 2-D or 3-D convolutions are inevitably used as the basic operations to extract spatial or spectral-spatial features. However, 2-D and 3-D convolutions are sensitive to image rotation, so recent CNN-based methods may not be robust to HSI rotation. In this article, a cross-attention spectral-spatial network (CASSN) is proposed to alleviate this problem. First, a cross-spectral attention component exploits the local and global spectra of a pixel to generate band weights that suppress redundant bands. Second, a spectral feature extraction component captures spectral features. Then, a cross-spatial attention component generates spectral-spatial features from the HSI patch under the guidance of the pixel to be classified. Finally, the spectral-spatial feature is fed to a softmax classifier to obtain the category. The effectiveness of CASSN is demonstrated on three public databases. |
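The band-weighting idea from the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's actual CASSN architecture: the similarity score (elementwise product of local and global spectra), the softmax normalization, and the function names `softmax` and `band_weights` are illustrative assumptions, chosen only to show how a per-band attention weight could combine a pixel's local spectrum with the patch's global spectrum.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def band_weights(patch, center):
    """patch: (H, W, B) HSI patch; center: (B,) spectrum of the pixel
    to classify. Returns per-band weights summing to 1 that emphasize
    bands where the local (center) and global (patch-mean) spectra agree."""
    global_spec = patch.reshape(-1, patch.shape[-1]).mean(axis=0)  # (B,)
    score = center * global_spec  # one simple per-band similarity choice
    return softmax(score)

# Toy usage: a 5x5 patch with 8 spectral bands.
rng = np.random.default_rng(0)
patch = rng.random((5, 5, 8))
w = band_weights(patch, patch[2, 2])
weighted_pixel = patch[2, 2] * w  # reweighted spectrum for later layers
```

In a trained network the similarity score would be produced by learned layers rather than a fixed elementwise product; the sketch only mirrors the "local and global spectra generate band weights" step described above.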
doi_str_mv | 10.1109/TGRS.2021.3133582 |
format | Article |
publisher | New York: IEEE |
coden | IGRSD2 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0196-2892 |
ispartof | IEEE transactions on geoscience and remote sensing, 2022, Vol.60, p.1-14 |
issn | 0196-2892 1558-0644 |
language | eng |
recordid | cdi_ieee_primary_9641863 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial neural networks; Classification; Convolutional neural networks; Convolutional neural networks (CNNs); Correlation; Feature extraction; hyperspectral image (HSI) classification; Hyperspectral imaging; Image classification; Image rotation; Imaging; Methods; Neural networks; Pixels; Rotation; spatial attention; Spatial data; Spatial databases; Spectra; spectral attention; Sun |
title | Cross-Attention Spectral-Spatial Network for Hyperspectral Image Classification |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-26T18%3A02%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Cross-Attention%20Spectral-Spatial%20Network%20for%20Hyperspectral%20Image%20Classification&rft.jtitle=IEEE%20transactions%20on%20geoscience%20and%20remote%20sensing&rft.au=Yang,%20Kai&rft.date=2022&rft.volume=60&rft.spage=1&rft.epage=14&rft.pages=1-14&rft.issn=0196-2892&rft.eissn=1558-0644&rft.coden=IGRSD2&rft_id=info:doi/10.1109/TGRS.2021.3133582&rft_dat=%3Cproquest_RIE%3E2633042158%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2633042158&rft_id=info:pmid/&rft_ieee_id=9641863&rfr_iscdi=true |