Hyperspectral Image Compression via Cross-Channel Contrastive Learning
In recent years, advances in deep learning have driven rapid progress in hyperspectral image (HSI) compression. However, most existing approaches rely directly on rate-distortion optimization, with no other guidance during model learning. This makes it hard to distinguish the similar features and objects that are widespread in HSIs, especially in remote sensing scenes, since quantization in lossy compression can cause informative attributes (e.g., category) to collapse or be lost at high compression ratios.
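The cross-channel contrastive idea named in the title can be illustrated with a minimal, hypothetical sketch: treat each latent channel as its own class, take two spatial views of every channel, and score same-channel views against all other channels with an InfoNCE-style loss. This is only a rough analogue of the paper's CIFE objective under assumptions of mine, not the authors' implementation; the function name and toy shapes are illustrative.

```python
import numpy as np

def channel_contrastive_loss(latents, temperature=0.1):
    """InfoNCE-style loss that pushes latent channels apart.

    latents: (C, H, W) latent cube from an analysis transform.
    The two spatial halves of the same channel form a positive pair;
    every other channel serves as a negative. Illustrative only, a
    rough analogue of the paper's CIFE idea, not the authors' code.
    """
    c, h, w = latents.shape
    half = w // 2
    # One view per spatial half, flattened and L2-normalized per channel.
    v1 = latents[:, :, :half].reshape(c, -1)
    v2 = latents[:, :, half:2 * half].reshape(c, -1)
    v1 = v1 / np.linalg.norm(v1, axis=1, keepdims=True)
    v2 = v2 / np.linalg.norm(v2, axis=1, keepdims=True)
    logits = v1 @ v2.T / temperature  # (C, C) cosine-similarity logits
    # Cross-entropy with the matching channel (the diagonal) as the target.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Random stand-in for a latent cube with 8 channels of 4x6 features.
latents = np.random.default_rng(0).normal(size=(8, 4, 6))
loss = channel_contrastive_loss(latents)  # scalar; lower = more distinct channels
```

Minimizing this loss drives different channels toward dissimilar representations, which is the "enlarging the discrimination over the learned latents in different channel indexes" behavior the abstract describes.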
Saved in:
Published in: | IEEE Transactions on Geoscience and Remote Sensing, 2023-01, Vol. 61, p. 1-1 |
---|---|
Main authors: | Guo, Yuanyuan; Chong, Yanwen; Pan, Shaoming |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 1 |
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on geoscience and remote sensing |
container_volume | 61 |
creator | Guo, Yuanyuan; Chong, Yanwen; Pan, Shaoming |
description | In recent years, advances in deep learning have driven rapid progress in hyperspectral image (HSI) compression. However, most existing approaches rely directly on rate-distortion optimization, with no other guidance during model learning. This makes it hard to distinguish the similar features and objects that are widespread in HSIs, especially in remote sensing scenes, since quantization in lossy compression can cause informative attributes (e.g., category) to collapse or be lost at high compression ratios. In this paper, we propose a novel hyperspectral compression network via contrastive learning (HCCNet) to help generate discriminative representations and preserve informative attributes as much as possible. Specifically, we design a contrastive informative feature encoding (CIFE) to extract and organize discriminative attributes from the original HSIs, enlarging the discrimination among the learned latents at different channel indices to alleviate attribute collapse. In the case of attribute loss, we define a contrastive invariant feature recovery (CIFR) to recover the lost attributes via contrastive feature refinement. Experiments on five HSI datasets show that the proposed HCCNet achieves impressive compression performance, e.g., improving peak signal-to-noise ratio (PSNR) on the Chikusei dataset from 28.86 dB (at 0.2284 bpppb) to 30.30 dB (at 0.1960 bpppb). |
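The figures quoted in the abstract use two standard metrics: PSNR in dB for reconstruction quality, and bits per pixel per band (bpppb) for rate. A short sketch of both follows; the 40,000-byte cube below is a made-up toy figure, not a result from the paper.

```python
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means better reconstruction."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def bpppb(compressed_bytes, height, width, bands):
    """Bits per pixel per band: total bit cost normalized by cube size."""
    return compressed_bytes * 8.0 / (height * width * bands)

# Hypothetical example: a 128x128 cube with 100 bands compressed to 40,000 bytes.
rate = bpppb(40_000, 128, 128, 100)  # 0.1953125 bpppb, near the rates quoted above
```

Note the normalization by band count: bpppb, rather than plain bits per pixel, is the conventional rate unit for hyperspectral cubes because the spectral dimension dominates the raw data volume.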
doi_str_mv | 10.1109/TGRS.2023.3282186 |
format | Article |
fullrecord | ProQuest/IEEE source record (duplicates the title, authors, abstract, and subject terms listed elsewhere in this record). Distinct details: publisher: New York: IEEE; CODEN: IGRSD2; rights: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023; peer reviewed; ORCID iDs: 0000-0001-6789-3876, 0000-0002-0223-9037, 0000-0002-7944-8515; IEEE document ID: 10143262; full text: https://ieeexplore.ieee.org/document/10143262 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0196-2892 |
ispartof | IEEE transactions on geoscience and remote sensing, 2023-01, Vol.61, p.1-1 |
issn | 0196-2892 (print); 1558-0644 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2826475872 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Compression; Compression ratio; Contrastive learning; Datasets; Deep learning; Entropy; high-quality reconstruction; hyperspectral image compression; Hyperspectral imaging; Image coding; Image compression; Image reconstruction; Optimization; Performance indices; Rate-distortion; Remote sensing; Signal to noise ratio; Transforms |
title | Hyperspectral Image Compression via Cross-Channel Contrastive Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-19T00%3A38%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Hyperspectral%20Image%20Compression%20via%20Cross-Channel%20Contrastive%20Learning&rft.jtitle=IEEE%20transactions%20on%20geoscience%20and%20remote%20sensing&rft.au=Guo,%20Yuanyuan&rft.date=2023-01-01&rft.volume=61&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=0196-2892&rft.eissn=1558-0644&rft.coden=IGRSD2&rft_id=info:doi/10.1109/TGRS.2023.3282186&rft_dat=%3Cproquest_RIE%3E2826475872%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2826475872&rft_id=info:pmid/&rft_ieee_id=10143262&rfr_iscdi=true |