Perceptual redundancy model for compression of screen content videos
| Published in: | IET Image Processing, 2022-05, Vol. 16 (6), pp. 1724-1741 |
|---|---|
| Main authors: | , , |
| Format: | Article |
| Language: | English |
| Online access: | Full text |
| Abstract: | Screen content video (SCV) consists primarily of text areas, computer graphics, and other computer-generated content, and possesses unique perceptual characteristics. To compress SCVs more effectively with less loss of subjective quality, the perceptual characteristics of SCVs are analyzed and a perceptual redundancy (PR) model for SCV compression is proposed, comprising spatial PR (SPR), temporal PR (TPR), and foveated PR (FPR) models. In SPR modeling, each SCV frame is divided into sharp edge (SE) areas and non-SE areas, and the SPR of each is estimated separately. In TPR modeling, both the inter-frame luminance adaptation effect and the motion masking effect are taken into account. In FPR modeling, each frame of the SCV is classified as an abrupt frame, a relative-motion frame, or a static frame; fixation points for the different kinds of frames are then predicted with different methods, and the FPR is modeled considering the foveated masking effect and visual attention. Finally, the perceptual redundancy of the SCV is estimated as the product of the SPR, TPR, and FPR (a sketch of this combination step follows the record). Experiments demonstrate that, compared with state-of-the-art models, the authors' model obtains more accurate estimates of PR. Moreover, when the model is incorporated into SCV compression with an adaptive perceptual quantizer, an average of 7.42% of bits is saved with less decline in subjective quality. |
| ISSN: | 1751-9659, 1751-9667 |
| DOI: | 10.1049/ipr2.12443 |
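
The abstract specifies only that the final PR map is the product of the three component maps. Below is a minimal NumPy sketch of that last combination step, assuming per-pixel maps of equal shape; the function name `combine_pr` and the toy value ranges are illustrative assumptions, not the paper's notation, and the three component models themselves (SPR, TPR, FPR) are not reproduced here.

```python
import numpy as np

def combine_pr(spr: np.ndarray, tpr: np.ndarray, fpr: np.ndarray) -> np.ndarray:
    """Combine three per-pixel redundancy maps into the final PR map.

    The abstract states only that PR is estimated as the product of
    SPR, TPR and FPR; an elementwise product is assumed here.
    """
    if not (spr.shape == tpr.shape == fpr.shape):
        raise ValueError("SPR, TPR and FPR maps must share the frame shape")
    return spr * tpr * fpr

# Toy example: three synthetic 4x4 redundancy maps for one frame.
# Value ranges are placeholders, not taken from the paper.
rng = np.random.default_rng(0)
h, w = 4, 4
spr = rng.uniform(0.0, 8.0, (h, w))  # spatial redundancy (JND-like threshold)
tpr = rng.uniform(1.0, 2.0, (h, w))  # temporal modulation factor
fpr = rng.uniform(1.0, 3.0, (h, w))  # foveation modulation factor

pr = combine_pr(spr, tpr, fpr)
print(pr.round(2))
```

In a codec integration such as the adaptive perceptual quantizer the abstract mentions, a map like `pr` would typically scale the quantization step per region, so that areas with higher estimated redundancy tolerate coarser quantization; the exact mapping used by the authors is not given in this record.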