Latent-Shift: Gradient of Entropy Helps Neural Codecs
Main authors: , , , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: End-to-end image/video codecs are becoming competitive with traditional compression techniques that have been developed through decades of manual engineering effort. These trainable codecs have many advantages over traditional techniques, such as easy adaptation to perceptual distortion metrics and high performance on specific domains thanks to their learning ability. However, state-of-the-art neural codecs do not exploit the gradient of entropy that is available on the decoding device. In this paper, we show theoretically that the gradient of entropy (available at the decoder side) is correlated with the gradient of the reconstruction error (which is not available at the decoder side). We then demonstrate experimentally that this gradient can be used with various compression methods, leading to $1-2\%$ rate savings at the same quality. Our method is orthogonal to other improvements and brings independent rate savings.
DOI: 10.48550/arxiv.2308.00725
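
To make the abstract's idea concrete, below is a minimal sketch of a decoder-side latent shift guided by the gradient of entropy, assuming a PyTorch codec whose entropy model returns per-element likelihoods and a synthesis transform that maps latents to pixels. The names `entropy_model`, `synthesis`, and the step size `alpha` are illustrative assumptions, not the paper's actual API.

```python
import torch

def decode_with_latent_shift(y_hat, entropy_model, synthesis, alpha=1e-3):
    """Shift the decoded latent against the gradient of the rate.

    The rate R = -sum(log2 p(y_hat)) is computable on the decoder side
    from the entropy model alone. The paper argues this gradient is
    correlated with the (decoder-unavailable) gradient of the
    reconstruction error, so a small step against it tends to improve
    quality at no extra bit cost. `alpha` is a hypothetical step size.
    """
    # Enable autograd on the dequantized latent received from the bitstream.
    y_hat = y_hat.detach().requires_grad_(True)
    likelihoods = entropy_model(y_hat)        # p(y_hat), per element
    rate = -torch.log2(likelihoods).sum()     # bits needed to code y_hat
    (grad_rate,) = torch.autograd.grad(rate, y_hat)
    y_shifted = (y_hat - alpha * grad_rate).detach()  # the latent shift
    return synthesis(y_shifted)               # reconstruct from shifted latent
```

Since the entropy model is already present on the decoder, such a shift would require no extra signaling in the bitstream; in practice the step size would presumably need tuning per model and rate point.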