Hyperspectral Image Compression via Cross-Channel Contrastive Learning
| Published in: | IEEE Transactions on Geoscience and Remote Sensing, 2023-01, Vol. 61, p. 1-1 |
|---|---|
| Main Authors: | , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online Access: | Order full text |
| ISSN: | 0196-2892, 1558-0644 |
| DOI: | 10.1109/TGRS.2023.3282186 |

Abstract: In recent years, advances in deep learning have greatly promoted the development of hyperspectral image (HSI) compression algorithms. However, most existing compression approaches rely directly on rate-distortion optimization, with no other guidance during model learning. This makes it difficult to distinguish the similar features or objects that are widespread in HSIs, especially in remote sensing scenes, because quantization in lossy compression can cause informative attributes (e.g., category) to collapse or be lost at high compression ratios. In this paper, we propose a novel hyperspectral compression network via contrastive learning (HCCNet) to generate discriminative representations and preserve informative attributes as much as possible. Specifically, we design a contrastive informative feature encoding (CIFE) that extracts and organizes discriminative attributes from the original HSIs by enlarging the discrimination over the learned latents at different channel indexes, which relieves attribute collapse. To handle attribute loss, we define a contrastive invariant feature recovery (CIFR) that recovers the lost attributes via contrastive feature refinement. Experiments on five HSI datasets show that the proposed HCCNet achieves impressive compression performance, for example improving the peak signal-to-noise ratio (PSNR) on the Chikusei dataset from 28.86 dB (at 0.2284 bpppb) to 30.30 dB (at 0.1960 bpppb).
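
Only the abstract is available in this record, so the exact CIFE/CIFR objectives are not reproduced here. As a rough illustration of the general idea of "enlarging the discrimination over the learned latents at different channel indexes", the following PyTorch sketch applies a standard InfoNCE-style contrastive loss across channel indexes; the function name, tensor shapes, and temperature are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: channel-wise contrastive (InfoNCE-style) loss over two views
# of a latent tensor. This is an assumed, generic formulation inspired by the
# abstract, not the paper's actual CIFE/CIFR loss.
import torch
import torch.nn.functional as F

def channel_contrastive_loss(latents_a, latents_b, temperature=0.1):
    """latents_a, latents_b: (C, D) latent descriptors, one row per channel index."""
    a = F.normalize(latents_a, dim=1)          # unit-norm descriptor per channel
    b = F.normalize(latents_b, dim=1)
    logits = a @ b.t() / temperature           # (C, C) cosine-similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Channel c of view A should match channel c of view B (diagonal positives);
    # every other channel index acts as a negative, pushing latents apart.
    return F.cross_entropy(logits, targets)

# Toy usage with assumed sizes: 192 latent channels, 64-dim descriptors.
z1 = torch.randn(192, 64)
z2 = z1 + 0.05 * torch.randn_like(z1)
loss = channel_contrastive_loss(z1, z2)
```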
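The quoted results are expressed as PSNR in dB at a given rate in bits per pixel per band (bpppb). For reference, these two metrics are conventionally computed as below; the peak value and the way the bitstream length is counted depend on the dataset and codec, so treat this as a generic sketch.

```python
# Conventional definitions of PSNR and bpppb for hyperspectral compression results.
import numpy as np

def psnr(x, x_hat, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between an HSI cube and its reconstruction."""
    mse = np.mean((np.asarray(x, dtype=np.float64) - np.asarray(x_hat, dtype=np.float64)) ** 2)
    return 10.0 * np.log10((max_val ** 2) / mse)

def bpppb(bitstream_length_bits, height, width, bands):
    """Rate in bits per pixel per band: total coded bits divided by H * W * B."""
    return bitstream_length_bits / (height * width * bands)
```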