AMFAN: Adaptive Multiscale Feature Attention Network for Hyperspectral Image Classification
Recently, convolutional neural networks (CNNs) have been widely used in hyperspectral image (HSI) classification with appreciable performance. However, the current CNN-based HSI classification methods have limitations in exploiting the multiscale features and extracting sufficiently discriminative f...
Saved in:
Published in: | IEEE geoscience and remote sensing letters 2022, Vol.19, p.1-5 |
---|---|
Main authors: | Zhang, Shichao; Zhang, Jiahua; Xun, Lan; Wang, Jingwen; Zhang, Da; Wu, Zhenjiang |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 5 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE geoscience and remote sensing letters |
container_volume | 19 |
creator | Zhang, Shichao; Zhang, Jiahua; Xun, Lan; Wang, Jingwen; Zhang, Da; Wu, Zhenjiang |
description | Recently, convolutional neural networks (CNNs) have been widely used in hyperspectral image (HSI) classification with appreciable performance. However, current CNN-based HSI classification methods have limitations in exploiting multiscale features and extracting sufficiently discriminative features, and the commonly adopted dimensionality reduction methods, such as principal component analysis (PCA), can discard some or all of the physical information of the original bands. To address these problems, in this letter, we propose an adaptive multiscale feature attention network (AMFAN) for HSI classification. First, we use a band selection algorithm for dimensionality reduction, which helps preserve the original characteristics of the image. Second, unlike existing multiscale feature extraction methods, which treat features of different scales as equally important, we propose an adaptive multiscale feature residual module (AMFRM) that assigns each scale its own importance. Finally, because the input to a deep learning (DL)-based HSI classification model is a patch cube, the only initially available label is the category of the center pixel. However, a patch often contains pixels of categories other than that of the center pixel, and existing attention mechanisms do not account for the impact of such pixels on HSI classification; we therefore design a novel position attention module (PAM) that computes the similarity between the center (target) pixel and its surrounding pixels and pays more attention to pixels highly similar to the center pixel. In addition, a spectral attention module (SAM) is used to obtain more discriminative spectral features. Experimental results show that the proposed AMFAN effectively improves classification accuracy and outperforms state-of-the-art CNNs. (An illustrative code sketch of the PAM and adaptive scale-weighting ideas follows after this record.) |
doi_str_mv | 10.1109/LGRS.2022.3193488 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1545-598X |
ispartof | IEEE geoscience and remote sensing letters, 2022, Vol.19, p.1-5 |
issn | 1545-598X (print); 1558-0571 (electronic) |
language | eng |
recordid | cdi_crossref_primary_10_1109_LGRS_2022_3193488 |
source | IEEE Electronic Library (IEL) |
subjects | Adaptive multiscale feature attention network (AMFAN); Algorithms; Artificial neural networks; band selection; Classification; Classification algorithms; Deep learning; deep learning (DL); Dimensionality reduction; Feature extraction; hyperspectral image (HSI) classification; Hyperspectral imaging; Image classification; IP networks; Machine learning; Methods; Modules; Neural networks; Pixels; Principal components analysis; Reduction; Semantics; Similarity; Solid modeling; Spatial resolution |
title | AMFAN: Adaptive Multiscale Feature Attention Network for Hyperspectral Image Classification |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T01%3A29%3A33IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=AMFAN:%20Adaptive%20Multiscale%20Feature%20Attention%20Network%20for%20Hyperspectral%20Image%20Classification&rft.jtitle=IEEE%20geoscience%20and%20remote%20sensing%20letters&rft.au=Zhang,%20Shichao&rft.date=2022&rft.volume=19&rft.spage=1&rft.epage=5&rft.pages=1-5&rft.issn=1545-598X&rft.eissn=1558-0571&rft.coden=IGRSBY&rft_id=info:doi/10.1109/LGRS.2022.3193488&rft_dat=%3Cproquest_RIE%3E2697563403%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2697563403&rft_id=info:pmid/&rft_ieee_id=9837928&rfr_iscdi=true |
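The description above names two concrete mechanisms: an adaptive multiscale feature residual module (AMFRM) that gives each feature scale its own importance, and a position attention module (PAM) that weights the pixels of a patch by their similarity to the center (target) pixel. The PyTorch sketch below is not the authors' implementation; the module names (AdaptiveMultiscaleBlock, CenterPixelAttention), kernel sizes, cosine-similarity measure, and softmax weighting are all assumptions made only to illustrate the general shape of the two ideas.

```python
# Minimal, hypothetical sketch of two ideas from the abstract (not the authors' code):
#  1) adaptive weighting of multiscale convolution branches (AMFRM-like idea)
#  2) position attention driven by similarity to the patch's center pixel (PAM-like idea)
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveMultiscaleBlock(nn.Module):
    """Parallel convolutions at several scales, fused with learned softmax weights."""

    def __init__(self, channels, scales=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in scales
        )
        # One learnable importance score per scale, normalized with softmax at runtime.
        self.scale_logits = nn.Parameter(torch.zeros(len(scales)))

    def forward(self, x):                        # x: (B, C, H, W)
        w = torch.softmax(self.scale_logits, dim=0)
        out = sum(w[i] * branch(x) for i, branch in enumerate(self.branches))
        return F.relu(out + x)                   # residual connection


class CenterPixelAttention(nn.Module):
    """Weight each spatial position by its cosine similarity to the center pixel."""

    def forward(self, x):                        # x: (B, C, H, W), H and W assumed odd
        b, c, h, w = x.shape
        center = x[:, :, h // 2, w // 2].view(b, c, 1, 1)
        sim = F.cosine_similarity(x, center, dim=1)            # (B, H, W)
        attn = torch.softmax(sim.view(b, -1), dim=1).view(b, 1, h, w)
        return x * attn * (h * w)                # rescale so weights average to ~1


if __name__ == "__main__":
    patch = torch.randn(2, 30, 9, 9)             # e.g. 30 selected bands, 9x9 patch cube
    feats = AdaptiveMultiscaleBlock(30)(patch)
    feats = CenterPixelAttention()(feats)
    print(feats.shape)                           # torch.Size([2, 30, 9, 9])
```

In the letter itself, the scale weights and attention maps may well be computed differently (for example, from the input features rather than as free parameters), and a separate spectral attention module (SAM) is applied along the band dimension; the sketch only fixes the overall data flow for a bands x height x width patch cube.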