Residual Encoder-Decoder Conditional Generative Adversarial Network for Pansharpening
Published in: IEEE Geoscience and Remote Sensing Letters, 2020-09, Vol. 17 (9), pp. 1573-1577
Authors: Shao, Zhimin; Lu, Zexin; Ran, Maosong; Fang, Leyuan; Zhou, Jiliu; Zhang, Yi
Format: Article
Language: English
Abstract: Due to the limitations of satellite sensors, it is difficult to acquire a high-resolution (HR) multispectral (HRMS) image directly. The aim of pansharpening (PNN) is to fuse the spatial information in a panchromatic (PAN) image with the spectral information in a multispectral (MS) image. Recently, deep learning has drawn much attention, and several pioneering attempts related to PNN have been made in the field of remote sensing. However, the large volume of remote sensing data yields many training samples, which call for a deeper neural network; most current networks are relatively shallow and therefore risk losing detail. In this letter, we propose a residual encoder-decoder conditional generative adversarial network (RED-cGAN) for PNN that produces sharpened images with more detail. The proposed method combines the idea of an autoencoder with a generative adversarial network (GAN), which can effectively preserve the spatial and spectral information of the PAN and MS images simultaneously. First, a residual encoder-decoder module is adopted to extract multiscale features for yielding pansharpened images and to relieve the training difficulty caused by deepening the network. Second, to further encourage the generator to preserve spatial information, a conditional discriminator that takes the PAN and MS images as input is proposed, so that the estimated MS images share the same distribution as the reference HRMS images. Experiments conducted on WorldView-2 (WV2) and WorldView-3 (WV3) images demonstrate that the proposed method outperforms several state-of-the-art PNN methods.
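The abstract names two components: a residual encoder-decoder generator fed with the PAN and upsampled MS images, and a discriminator conditioned on the same PAN/MS pair. The PyTorch sketch below is an illustrative reconstruction of that idea under stated assumptions, not the authors' released code: the layer widths, kernel sizes, the 4-band MS assumption, and the class names `ResidualEncoderDecoderGenerator` and `ConditionalDiscriminator` are all hypothetical; consult the letter for the exact architecture and loss weights.

```python
# Minimal sketch of the RED-cGAN idea from the abstract.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualEncoderDecoderGenerator(nn.Module):
    """Encoder-decoder generator with skip (residual) connections.

    Concatenates the PAN image (1 band) with the upsampled MS image
    (assumed 4 bands) and predicts the pansharpened HRMS image.
    """

    def __init__(self, ms_bands: int = 4, base: int = 32):
        super().__init__()
        in_ch = ms_bands + 1  # PAN + upsampled MS, stacked on the channel axis
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(base * 4, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec1 = nn.Conv2d(base, ms_bands, 3, padding=1)

    def forward(self, pan: torch.Tensor, ms_up: torch.Tensor) -> torch.Tensor:
        x = torch.cat([pan, ms_up], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3) + e2      # skip connections carry multiscale detail
        d2 = self.dec2(d3) + e1      # and ease training of the deeper network
        return self.dec1(d2) + ms_up  # global residual: predict detail over MS


class ConditionalDiscriminator(nn.Module):
    """Discriminator conditioned on the observed PAN/MS pair.

    Scores whether an HRMS candidate is real or generated *given* the
    inputs, pushing generated images toward the distribution of the
    reference HRMS images.
    """

    def __init__(self, ms_bands: int = 4, base: int = 32):
        super().__init__()
        in_ch = ms_bands + 1 + ms_bands  # candidate HRMS + PAN + upsampled MS
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, padding=1),  # patch-level real/fake logits
        )

    def forward(self, hrms, pan, ms_up):
        return self.net(torch.cat([hrms, pan, ms_up], dim=1))


if __name__ == "__main__":
    # Shape check on random patches (hypothetical 64x64 patch size).
    g, d = ResidualEncoderDecoderGenerator(), ConditionalDiscriminator()
    pan = torch.randn(2, 1, 64, 64)    # PAN patch at full resolution
    ms_up = torch.randn(2, 4, 64, 64)  # MS patch upsampled to PAN resolution
    fake_hrms = g(pan, ms_up)          # -> (2, 4, 64, 64)
    scores = d(fake_hrms, pan, ms_up)  # patch-level conditional scores
    print(fake_hrms.shape, scores.shape)
```

In training, the generator would combine a pixel-wise content loss against the reference HRMS image with the adversarial loss from the conditional discriminator; the global residual over the upsampled MS input reflects the abstract's claim that the network learns detail rather than the full image.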
DOI: 10.1109/LGRS.2019.2949745
ISSN: 1545-598X
EISSN: 1558-0571
Source: IEEE Electronic Library (IEL)
Subjects: Coders; Decoding; Deep learning; Feature extraction; Gallium nitride; generative adversarial network (GAN); Generative adversarial networks; Generators; Image acquisition; Image resolution; Machine learning; multispectral (MS) image; Neural networks; panchromatic (PAN); pansharpening (PNN); Remote sensing; Satellites; Spatial data; Task analysis; Training