Hash-Based Line-by-Line Template Matching for Lossless Screen Image Coding



Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2016-12, Vol. 25 (12), pp. 5601-5609
Main Authors: Peng, Xiulian; Xu, Jizheng
Format: Article
Language: English
Description
Abstract: Template matching (TM) was proposed in the literature a decade ago to efficiently remove non-local redundancies within an image without transmitting any overhead of displacement vectors. However, the large computational complexity it introduces at both the encoder and the decoder, especially for a large search range, limits its widespread use. This paper proposes a hash-based line-by-line template matching (hLTM) scheme for lossless screen image coding, where non-local redundancy is common in text and graphics regions. Hash-based search largely reduces the search complexity of template matching without degrading accuracy, while line-by-line template matching increases prediction accuracy through its fine granularity. Experimental results show that hLTM reduces encoding and decoding complexities by factors of 68 and 23, respectively, compared with traditional TM with a search radius of 128. Moreover, compared with the High Efficiency Video Coding screen content coding test model SCM-1.0, it improves coding efficiency by up to 12.68% bit savings on screen content with rich text/graphics.
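
Illustrative sketch: to make the idea of hash-accelerated, causal template matching concrete, the Python fragment below predicts one image row from previously coded pixels by hashing a small causal template and looking up earlier positions with the same template. This is not the authors' hLTM algorithm; the template shape, the hash function (Python's built-in hash over raw bytes), the candidate-selection rule, and the names template_key and hltm_predict_row are all assumptions made for illustration only.

import numpy as np

def template_key(img, y, x, width):
    """Hash key from the causal template of pixel (y, x):
    `width` pixels from the row above plus the pixel to its left.
    Assumes y >= 1 and x >= 1."""
    above = img[y - 1, max(0, x - width):x]
    left = img[y, x - 1:x]
    return hash(above.tobytes() + left.tobytes())

def hltm_predict_row(img, y, width=4):
    """Predict row y pixel by pixel from already-coded rows via a hash
    table of previously seen templates (line-by-line granularity assumed)."""
    h, w = img.shape
    table = {}  # template hash -> list of positions (py, px) with that template
    # Index the templates of all previously coded rows. A practical encoder
    # would maintain this table incrementally instead of rebuilding it per row.
    for py in range(1, y):
        for px in range(width, w):
            table.setdefault(template_key(img, py, px, width), []).append((py, px))
    pred = np.zeros(w, dtype=img.dtype)
    for x in range(width, w):
        candidates = table.get(template_key(img, y, x, width), [])
        # Copy the pixel at the most recent matching position; fall back to
        # the left neighbour when no template match exists.
        pred[x] = img[candidates[-1]] if candidates else img[y, x - 1]
    return pred  # the residual img[y] - pred would then be entropy coded

Because both encoder and decoder can rebuild the same hash table from already-reconstructed pixels, no displacement vectors need to be transmitted; only the prediction residual is coded, which is the property the abstract highlights.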
ISSN: 1057-7149 (print), 1941-0042 (electronic)
DOI: 10.1109/TIP.2016.2612884