CLRNet: A Residual Network Based on ConvLSTM for Progressive Pansharpening

In this letter, we design a progressive pansharpening network termed CLRNet, which cascades two deep residual subnets (DRNets) with the same structure and then employs these two subnets to perform progressive fusion at two scales, gradually fusing panchromatic (PAN) images and low-resolution multispectral (LRMS) images. DRNet cascades multiple convolutional long short-term memory (ConvLSTM) units that can capture the dependence relationships of hierarchical features. First, considering the sensitivity of spectral features to hierarchy and spatial features to scale, we have constructed a deep progressive pansharpening network to comprehensively represent the original information. Second, as the number of network layers increases, the high-frequency content of feature maps in deep networks is gradually smoothed out. Therefore, introducing residual learning into the network can enhance our attention to texture details and improve the spatial resolution of fusion results. Finally, when extracting hierarchical features from deep networks, deep feature maps have a strong dependence on shallow feature maps. We capture the differences among hierarchical features and the differences among multiscale features, obtaining rich spatial features and realistic spectral features. The proposed CLRNet method achieves a quality with no reference (QNR) of 0.926, a structural similarity index measure (SSIM) of 0.984, and a relative dimensionless global error in synthesis (ERGAS) of 0.603 on GaoFen-2 datasets, a significant improvement over other state-of-the-art (SOTA) methods.
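The two-scale progressive fusion described above can be illustrated schematically. The sketch below is NOT the authors' network: `drnet_stage` is a hypothetical placeholder that stands in for a learned DRNet subnet, and the up/downsampling is nearest-neighbour and average pooling only. It shows the data flow (fuse at an intermediate scale, then refine at the full PAN scale) under those assumptions.

```python
import numpy as np

def up2(x):
    """Nearest-neighbour 2x upsampling over the last two (spatial) axes."""
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def down2(x):
    """2x2 average pooling over the last two (spatial) axes."""
    h, w = x.shape[-2] // 2, x.shape[-1] // 2
    return x[..., : 2 * h, : 2 * w].reshape(*x.shape[:-2], h, 2, w, 2).mean(axis=(-3, -1))

def drnet_stage(ms, pan):
    """Hypothetical stand-in for one DRNet subnet (the real one is a learned
    stack of ConvLSTM units). Here each MS band simply receives a crude PAN
    high-frequency residual, to illustrate the residual-fusion interface only."""
    detail = pan - up2(down2(pan))      # crude high-pass of the PAN image
    return ms + detail[None, :, :]      # broadcast the detail over all bands

def progressive_pansharpen(lrms, pan):
    """Two-scale progressive fusion: fuse at half resolution, then at full resolution."""
    pan_half = down2(pan)                        # PAN reduced to the intermediate scale
    ms_half = drnet_stage(up2(lrms), pan_half)   # stage 1: LRMS upsampled 2x and fused
    ms_full = drnet_stage(up2(ms_half), pan)     # stage 2: refine at the PAN scale
    return ms_full

# Shapes for a 4x resolution ratio: 4-band LRMS at 64x64, PAN at 256x256.
lrms = np.random.rand(4, 64, 64)
pan = np.random.rand(256, 256)
print(progressive_pansharpen(lrms, pan).shape)  # (4, 256, 256)
```

Each stage closes half of the resolution gap, so the spectral content of the LRMS input is adapted gradually rather than in one 4x jump.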

Detailed Description

Bibliographic Details
Published in: IEEE geoscience and remote sensing letters 2024, Vol.21, p.1-5
Main Authors: Li, Jie, Wang, Huajun, Liu, Dujin, Liu, Shujun
Format: Article
Language: eng
Subjects:
Online Access: Request full text
container_end_page 5
container_issue
container_start_page 1
container_title IEEE geoscience and remote sensing letters
container_volume 21
creator Li, Jie
Wang, Huajun
Liu, Dujin
Liu, Shujun
description In this letter, we design a progressive pansharpening network termed CLRNet, which cascades two deep residual subnets (DRNets) with the same structure and then employs these two subnets to perform progressive fusion at two scales, gradually fusing panchromatic (PAN) images and low-resolution multispectral (LRMS) images. DRNet cascades multiple convolutional long short-term memory (ConvLSTM) units that can capture the dependence relationships of hierarchical features. First, considering the sensitivity of spectral features to hierarchy and spatial features to scale, we have constructed a deep progressive pansharpening network to comprehensively represent the original information. Second, as the number of network layers increases, the high-frequency content of feature maps in deep networks is gradually smoothed out. Therefore, introducing residual learning into the network can enhance our attention to texture details and improve the spatial resolution of fusion results. Finally, when extracting hierarchical features from deep networks, deep feature maps have a strong dependence on shallow feature maps. We capture the differences among hierarchical features and the differences among multiscale features, obtaining rich spatial features and realistic spectral features. The proposed CLRNet method achieves a quality with no reference (QNR) of 0.926, a structural similarity index measure (SSIM) of 0.984, and a relative dimensionless global error in synthesis (ERGAS) of 0.603 on GaoFen-2 datasets, a significant improvement over other state-of-the-art (SOTA) methods.
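The abstract reports an ERGAS of 0.603 on GaoFen-2. For reference, the standard ERGAS definition (not the authors' evaluation code) can be sketched in numpy; the `ratio` convention below (PAN ground-sample distance divided by the MS one, 0.25 for a 1:4 sensor) is one common form and is an assumption here.

```python
import numpy as np

def ergas(reference, fused, ratio=0.25):
    """Relative dimensionless global error in synthesis (ERGAS).

    reference, fused: (bands, H, W) arrays at the same (high) resolution.
    ratio: PAN pixel size divided by the MS pixel size (0.25 for a 1:4
    system); note that some papers write the same factor as h/l.
    Lower ERGAS indicates better spectral fidelity; 0 is a perfect match.
    """
    bands = reference.shape[0]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((reference[k] - fused[k]) ** 2))  # per-band RMSE
        mu = np.mean(reference[k])                               # per-band mean
        acc += (rmse / mu) ** 2
    return 100.0 * ratio * np.sqrt(acc / bands)

ref = np.random.rand(4, 32, 32) + 1.0  # offset keeps band means away from zero
print(ergas(ref, ref))                 # 0.0 for a perfect reconstruction
```

Because each band's RMSE is normalized by that band's mean, ERGAS penalizes relative spectral distortion uniformly across bright and dark bands.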
doi_str_mv 10.1109/LGRS.2024.3412685
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1545-598X
ispartof IEEE geoscience and remote sensing letters, 2024, Vol.21, p.1-5
issn 1545-598X
1558-0571
language eng
recordid cdi_crossref_primary_10_1109_LGRS_2024_3412685
source IEEE Electronic Library (IEL)
subjects Convolutional long short-term memory (ConvLSTM)
Convolutional neural networks
Feature extraction
Feature maps
Frequency dependence
hierarchical features
High frequency
Image resolution
Indexes
Logic gates
Long short-term memory
Pansharpening
progressive pansharpening
residual network (PCDRN)
Residual neural networks
Spatial discrimination learning
Spatial resolution
Spectral sensitivity
state transfer
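Several subject terms above (ConvLSTM, logic gates, long short-term memory, state transfer) refer to the gating mechanism of a convolutional LSTM cell. The sketch below is a minimal, generic ConvLSTM step in numpy, not the authors' DRNet unit: biases and peephole terms are omitted, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def conv2d_same(x, w):
    """'Same'-padded 2D convolution of x:(Cin, H, W) with w:(Cout, Cin, k, k)."""
    cout, cin, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    h, wd = x.shape[1], x.shape[2]
    out = np.zeros((cout, h, wd))
    for i in range(k):
        for j in range(k):
            # each kernel tap adds a shifted, channel-mixed copy of the input
            out += np.einsum('oc,chw->ohw', w[:, :, i, j], xp[:, i:i + h, j:j + wd])
    return out

def convlstm_step(x, h, c, wx, wh):
    """One ConvLSTM step: input/forget/output gates and candidate state are all
    convolutions over the input x and hidden state h (biases omitted)."""
    gates = conv2d_same(x, wx) + conv2d_same(h, wh)
    i, f, o, g = np.split(gates, 4, axis=0)
    c_next = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # state transfer across steps
    h_next = sigmoid(o) * np.tanh(c_next)
    return h_next, c_next

# Tiny example: 2 input channels, 3 hidden channels, 3x3 kernels, 8x8 maps.
cin, ch, k, H, W = 2, 3, 3, 8, 8
wx = rng.standard_normal((4 * ch, cin, k, k)) * 0.1
wh = rng.standard_normal((4 * ch, ch, k, k)) * 0.1
h = np.zeros((ch, H, W)); c = np.zeros((ch, H, W))
for _ in range(3):  # successive inputs, e.g. hierarchical feature maps
    x = rng.standard_normal((cin, H, W))
    h, c = convlstm_step(x, h, c, wx, wh)
print(h.shape)  # (3, 8, 8)
```

Cascading such units lets the cell state carry information from shallow feature maps into deeper ones, which is the dependence the abstract says the DRNet exploits.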
title CLRNet: A Residual Network Based on ConvLSTM for Progressive Pansharpening
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T18%3A48%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=CLRNet:%20A%20Residual%20Network%20Based%20on%20ConvLSTM%20for%20Progressive%20Pansharpening&rft.jtitle=IEEE%20geoscience%20and%20remote%20sensing%20letters&rft.au=Li,%20Jie&rft.date=2024&rft.volume=21&rft.spage=1&rft.epage=5&rft.pages=1-5&rft.issn=1545-598X&rft.eissn=1558-0571&rft.coden=IGRSBY&rft_id=info:doi/10.1109/LGRS.2024.3412685&rft_dat=%3Cproquest_RIE%3E3075426584%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3075426584&rft_id=info:pmid/&rft_ieee_id=10553650&rfr_iscdi=true