Dual-Branch Grouping Multiscale Residual Embedding U-Net and Cross-Attention Fusion Networks for Hyperspectral Image Classification
Due to the high cost and time-consuming nature of acquiring labelled samples of hyperspectral data, classifying hyperspectral images from a small number of training samples has become an urgent problem. In recent years, U-Net has demonstrated that high-accuracy models can be trained from small amounts of data, making it well suited to small-sample settings. To this end, this paper proposes a dual-branch grouping multiscale residual embedding U-Net and cross-attention fusion network (DGMRU_CAF) for hyperspectral image classification. The network contains two branches, a spatial GMRU and a spectral GMRU, which reduce interference between the two feature types, spatial and spectral. Each branch introduces a U-Net and a grouped multiscale residual (GMR) block: in the spatial GMRU, the GMR block compensates for the feature information lost during spatial down-sampling, while in the spectral GMRU it addresses redundancy in the spectral dimension. To fuse the spatial and spectral features of the two branches effectively, a spatial-spectral cross-attention fusion (SSCAF) module is designed to enable their interactive fusion. Experimental results on the WHU-Hi-HanChuan and Pavia Center datasets show the superiority of the proposed method.
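The abstract describes the grouped multiscale residual (GMR) block only at a high level. As a reading aid, here is a minimal PyTorch sketch of one plausible form of such a block, assuming channels are split into groups, each group is convolved at its own kernel size, and the groups are merged back through a residual connection; the group count, kernel sizes, and normalization below are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch of a grouped multiscale residual (GMR) block.
# Group count, kernel sizes, and normalization are assumptions for
# illustration; the paper's actual configuration may differ.
import torch
import torch.nn as nn

class GMRBlock(nn.Module):
    """Split channels into groups, convolve each group at its own scale,
    concatenate the results, and add a residual connection."""

    def __init__(self, channels: int, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0, "channels must split evenly"
        group_ch = channels // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(group_ch, group_ch, k, padding=k // 2),  # same-size output
                nn.BatchNorm2d(group_ch),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(channels, channels, 1)  # 1x1 conv mixes the groups

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        groups = torch.chunk(x, len(self.branches), dim=1)  # split along channels
        out = torch.cat([b(g) for b, g in zip(self.branches, groups)], dim=1)
        return x + self.fuse(out)  # residual path preserves the input features
```

Grouping the channels before the multiscale convolutions keeps the parameter count well below that of full-width parallel convolutions, which fits the small-sample setting the paper targets.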
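The SSCAF module is likewise described only as enabling interactive fusion of the two branches' features. A common pattern for cross-attention fusion, sketched below under the assumption that it applies here, is to let each branch's features serve as queries against the other branch's keys and values; the head count, token layout, and output projection are hypothetical.

```python
# Hypothetical sketch of spatial-spectral cross-attention fusion in the
# spirit of the SSCAF module; the paper's exact design may differ.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Each branch attends to the other branch's features; the two
    attended results are projected to a single fused representation."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.spa_to_spe = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spe_to_spa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)  # merge the two attended streams

    def forward(self, spatial: torch.Tensor, spectral: torch.Tensor) -> torch.Tensor:
        # spatial, spectral: (batch, tokens, dim), e.g. flattened feature maps
        spa_att, _ = self.spa_to_spe(spatial, spectral, spectral)  # spatial queries spectral
        spe_att, _ = self.spe_to_spa(spectral, spatial, spatial)   # spectral queries spatial
        return self.proj(torch.cat([spa_att, spe_att], dim=-1))
```

In use, feature maps of shape (B, C, H, W) would be flattened to (B, H*W, C) token sequences before calling this module, and the fused output reshaped back for the classification head.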
Saved in:
Published in: | International journal of advanced computer science & applications, 2024, Vol. 15 (1) |
---|---|
Main authors: | Ouyang, Ning; Huang, Chenyu; Lin, Leping |
Format: | Article |
Language: | English |
Subjects: | Classification; Computer science; Design; Embedding; Hyperspectral imaging; Image classification; Redundancy |
Online access: | Full text |
DOI: | 10.14569/IJACSA.2024.0150160 |
Publisher: | Science and Information (SAI) Organization Limited, West Yorkshire |
Rights: | © 2024. Licensed under http://creativecommons.org/licenses/by/4.0/ |
ISSN: | 2158-107X |
EISSN: | 2156-5570 |
Source: | EZB Electronic Journals Library |