Advancing Hyperspectral and Multispectral Image Fusion: An Information-Aware Transformer-Based Unfolding Network

Bibliographic details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-05, Vol. PP, pp. 1-15
Main authors: Sun, Jianqiao; Chen, Bo; Lu, Ruiying; Cheng, Ziheng; Qu, Chunhui; Yuan, Xin
Format: Article
Language: English
Online access: Order full text
container_end_page 15
container_issue
container_start_page 1
container_title IEEE Transactions on Neural Networks and Learning Systems
container_volume PP
creator Sun, Jianqiao
Chen, Bo
Lu, Ruiying
Cheng, Ziheng
Qu, Chunhui
Yuan, Xin
description In hyperspectral image (HSI) processing, the fusion of a high-resolution multispectral image (HR-MSI) and a low-resolution HSI (LR-HSI) of the same scene, known as MSI-HSI fusion, is a crucial step in obtaining the desired high-resolution HSI (HR-HSI). With their powerful representation ability, convolutional neural network (CNN)-based deep unfolding methods have demonstrated promising performance. However, the limited receptive fields of CNNs often lead to inaccurate long-range spatial features, and the fixed input and output images of each stage in unfolding networks restrict feature transmission, limiting overall performance. To this end, we propose a novel and efficient information-aware transformer-based unfolding network (ITU-Net) to model long-range dependencies and transfer more information across stages. Specifically, we employ a customized transformer block that learns representations from both the spatial and frequency domains while avoiding quadratic complexity with respect to the input length. For spatial feature extraction, we develop an information transfer guided linearized attention (ITLA), which transmits high-throughput information between adjacent stages and extracts contextual features along the spatial dimension in linear complexity. Moreover, we introduce frequency-domain learning in the feedforward network (FFN) to capture token variations of the image and narrow the frequency gap. By integrating the proposed transformer blocks with the unfolding framework, our ITU-Net achieves state-of-the-art (SOTA) performance on both synthetic and real hyperspectral datasets.
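The two mechanisms the abstract names, linear-complexity attention and frequency-domain token mixing, can be illustrated with generic sketches. These are assumptions, not the paper's actual ITLA or FFN: the attention follows the common kernel-feature-map formulation (Katharopoulos et al., 2020), and the frequency filter is a toy stand-in; all function names and the damping weights are hypothetical.

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized attention with O(N) cost in sequence length N.

    q, k: (N, D) queries/keys; v: (N, E) values.
    The feature map phi(x) = elu(x) + 1 keeps features positive,
    so the per-query normalizer stays well-defined.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    q, k = phi(q), phi(k)
    kv = k.T @ v                   # (D, E): keys/values summed once, not per query
    z = q @ k.sum(axis=0) + eps    # (N,) normalizer per query
    return (q @ kv) / z[:, None]   # (N, E), never forms the (N, N) attention map

def frequency_ffn(x):
    """Toy frequency-domain token mixing: filter along the token axis
    in the Fourier domain, then transform back (hypothetical filter)."""
    X = np.fft.rfft(x, axis=0)                        # spectrum over tokens
    X *= np.linspace(1.0, 0.5, X.shape[0])[:, None]   # dampen high frequencies
    return np.fft.irfft(X, n=x.shape[0], axis=0)
```

Because the attention weights for each query are positive and normalized, feeding constant values returns (approximately) those constants, which makes the linear formulation easy to sanity-check; the frequency filter leaves the DC component untouched, so token means are preserved.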
doi_str_mv 10.1109/TNNLS.2024.3400809
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 2162-237X
ispartof IEEE Transactions on Neural Networks and Learning Systems, 2024-05, Vol. PP, pp. 1-15
issn 2162-237X
2162-2388
language eng
recordid cdi_pubmed_primary_38776209
source IEEE Electronic Library (IEL)
subjects Attention mechanism
Feature extraction
Frequency-domain analysis
Hyperspectral imaging
image fusion
Image reconstruction
Spatial resolution
Task analysis
Transformers
vision transformers (ViTs)
title Advancing Hyperspectral and Multispectral Image Fusion: An Information-Aware Transformer-Based Unfolding Network
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T17%3A20%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Advancing%20Hyperspectral%20and%20Multispectral%20Image%20Fusion:%20An%20Information-Aware%20Transformer-Based%20Unfolding%20Network&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Sun,%20Jianqiao&rft.date=2024-05-22&rft.volume=PP&rft.spage=1&rft.epage=15&rft.pages=1-15&rft.issn=2162-237X&rft.eissn=2162-2388&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2024.3400809&rft_dat=%3Cproquest_RIE%3E3059259208%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3059259208&rft_id=info:pmid/38776209&rft_ieee_id=10536168&rfr_iscdi=true