Adaptation and Attention for Neural Video Coding
Format: Article
Language: English
Abstract: Neural image coding now represents the state-of-the-art approach to
image compression. However, much work remains to be done in the video domain.
In this work, we propose an end-to-end learned video codec that introduces
several architectural and training novelties, revolving around the concepts of
adaptation and attention. Our codec is organized as an intra-frame codec paired
with an inter-frame codec. As one architectural novelty, we propose to train
the inter-frame codec model to adapt the motion estimation process based on
the resolution of the input video. A second architectural novelty is a new
neural block that combines concepts from split-attention based neural networks
and from DenseNets. Finally, we propose to overfit a set of decoder-side
multiplicative parameters at inference time. Through ablation studies and
comparisons to prior art, we show the benefits of our proposed techniques in
terms of coding gains. We compare our codec to VVC/H.266 and RLVC, which
represent the state-of-the-art traditional and end-to-end learned codecs,
respectively, and to E2E_T_OL, the top-performing end-to-end learned approach
in the 2021 CLIC competition. Our codec clearly outperforms E2E_T_OL and
compares favorably to VVC and RLVC in some settings.
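The abstract mentions a neural block combining split-attention networks with DenseNet-style connectivity. The sketch below is a minimal, hypothetical illustration of the split-attention idea only (ResNeSt-style gating over feature splits, here on plain vectors rather than convolutional feature maps); it is not the paper's actual block, whose exact structure the abstract does not specify.

```python
import numpy as np

def split_attention(x):
    """Softmax-gated fusion of r feature splits.

    x: array of shape (r, c) -- r splits, each with c channels.
    Returns the attention-weighted combination (c,) and the
    attention weights (r,). A stripped-down, hypothetical sketch.
    """
    s = x.sum(axis=0)                    # fuse splits by summation
    logits = x @ s                       # one score per split
    a = np.exp(logits - logits.max())
    a /= a.sum()                         # softmax over the splits
    out = (a[:, None] * x).sum(axis=0)   # weighted combination
    return out, a
```

In a DenseNet-style block, the fused output would then be concatenated with the block's input so that later layers see both, rather than replacing it.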
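The third contribution, overfitting decoder-side multiplicative parameters at inference time, can be illustrated on a toy problem: freeze a decoder, then adapt only per-channel scale factors to better reconstruct one specific input. The frozen linear "decoder" and the gradient-descent loop below are hypothetical stand-ins, not the paper's architecture or optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen "decoder": a fixed linear map from an 8-dim latent to
# 16 output channels (hypothetical stand-in for a learned decoder).
W = rng.normal(size=(16, 8))

def decode(z, gamma):
    # gamma multiplicatively rescales each decoder output channel;
    # only gamma is adapted at inference time, W stays frozen.
    return gamma * (W @ z)

z = rng.normal(size=8)                        # latent for one frame
target = W @ z + 0.1 * rng.normal(size=16)    # "ground-truth" frame

gamma = np.ones(16)                           # start from identity scaling
lr = 0.05
loss0 = np.mean((decode(z, gamma) - target) ** 2)
for _ in range(200):
    err = decode(z, gamma) - target
    grad = 2.0 * err * (W @ z) / err.size     # dMSE/dgamma
    gamma -= lr * grad
loss1 = np.mean((decode(z, gamma) - target) ** 2)
```

After the loop, `loss1` is lower than `loss0`: adapting the multiplicative parameters alone improves reconstruction of this one input, mirroring the per-content overfitting idea (in the codec, the adapted parameters would be signaled to the decoder).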
DOI: 10.48550/arxiv.2112.08767