Image Compression Using Vector Quantization
Saved in:
Main author: | , , , , |
Format: | Book chapter |
Language: | eng |
Subjects: | |
Online access: | Full text |
Summary: | Vector quantization is an efficient technique for compressing images. It is based on Shannon's rate-distortion theory, which states that better compression is achieved when samples are coded as vectors rather than scalars. A finite set of pixel vectors is stored in a memory called a codebook, which is used for coding and decoding the images. The image to be compressed is divided into blocks, called input vectors, which are compared against the vectors in memory, called codevectors, for a match according to some distance criterion. When a codevector matches the input vector, the index (memory address) of that codevector is stored or transmitted. Because the index has fewer bits than the codevector, compression is achieved. Decoding, or decompression, is the inverse of encoding. The quality of the reconstructed images depends on proper design of the codebook. The algorithms used to design vector quantizers (VQs) (encoder and decoder), such as the oldest and most famous, the Linde-Buzo-Gray (LBG) algorithm, are discussed in detail. Various types of VQs, such as mean-removed, gain-shape, and multistep, are presented. VQ designs using image transforms such as the discrete cosine and wavelet transforms are illustrated. The use of artificial neural networks in VQ design is also discussed. The performance of all the designed codebooks is compared. |
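The encode/decode cycle described in the summary can be sketched in a few lines. The following is a minimal illustration, not the chapter's implementation: codebook training is shown as a plain k-means-style loop (a simplified stand-in for the LBG splitting algorithm), the distance criterion is Euclidean, and all function names and parameters are hypothetical.

```python
import numpy as np

def train_codebook(training_vectors, k, iters=20, seed=0):
    # Toy k-means-style codebook training (simplified stand-in for LBG;
    # LBG proper grows the codebook by repeatedly splitting codevectors).
    rng = np.random.default_rng(seed)
    codebook = training_vectors[
        rng.choice(len(training_vectors), size=k, replace=False)
    ].astype(float)
    for _ in range(iters):
        # Nearest codevector for every training vector (Euclidean distance).
        dists = np.linalg.norm(
            training_vectors[:, None, :] - codebook[None, :, :], axis=2
        )
        labels = dists.argmin(axis=1)
        # Move each codevector to the centroid of the vectors assigned to it.
        for j in range(k):
            members = training_vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def vq_encode(blocks, codebook):
    # Store/transmit only the index of the best-matching codevector per block.
    dists = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def vq_decode(indices, codebook):
    # Decompression is a table lookup: the inverse of encoding.
    return codebook[indices]
```

The compression gain comes from the index being smaller than the vector: for example, with 2x2 pixel blocks (4 bytes each at 8 bits per pixel) and a 256-entry codebook, each block is replaced by one 8-bit index, a 4:1 ratio before any entropy coding.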
DOI: | 10.1201/b17738-8 |