No-Reference Stereoscopic Image Quality Assessment Using Convolutional Neural Network for Adaptive Feature Extraction



Bibliographic Details
Published in: IEEE Access, 2018-01, Vol. 6, pp. 37595-37603
Authors: Ding, Yong; Deng, Ruizhe; Xie, Xin; Xu, Xiaogang; Zhao, Yang; Chen, Xiaodong; Krylov, Andrey S.
Format: Article
Language: English
Abstract: The spread of 3-D technologies over the years has given rise to increasing demand for accurate and efficient stereoscopic image quality assessment (SIQA) methods, which are designed to automatically supervise and optimize 3-D image and video processing systems. Although 2-D IQA has attracted considerable attention, its 3-D counterpart is yet to be well explored. In this paper, a no-reference SIQA method using a convolutional neural network (CNN) for feature extraction is proposed. In the proposed method, a CNN model is trained from scratch to classify images according to their perceptual quality, with quality-aware monocular features extracted from a higher-level layer of the network. Visual saliency models are then utilized to fuse the captured monocular features. Meanwhile, multi-scale statistical features are derived from the binocular disparity maps. Finally, the fused CNN features and the disparity features are synthesized by support vector regression into an objective quality score for the stereoscopic image. Experimental results on two public databases demonstrate the superior performance of the proposed method over other state-of-the-art methods, in terms of both its accuracy in predicting stereoscopic image quality and its robustness across different databases and distortion types.
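
To make the pipeline described in the abstract concrete, the following Python sketch walks through its four stages: a small CNN trained to classify patches by quality level, with features taken from a higher-level layer; saliency-weighted fusion of the left- and right-view features; multi-scale statistics of the disparity map; and support vector regression to produce the final score. The network architecture, the saliency weighting scheme, and the particular disparity statistics shown here are illustrative assumptions, not the authors' exact configuration, which the abstract does not specify.

import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR

class QualityCNN(nn.Module):
    # Small CNN trained from scratch to classify patches into discrete
    # quality levels; features come from the penultimate (higher-level) layer.
    def __init__(self, num_quality_classes=5, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, num_quality_classes)

    def forward(self, x):
        feats = self.backbone(x)              # quality-aware monocular features
        return self.classifier(feats), feats

def saliency_weighted_fusion(f_left, f_right, sal_left, sal_right):
    # Weight each view's features by the total saliency of that view
    # (a stand-in for the saliency models mentioned in the abstract).
    w_l, w_r = sal_left.sum(), sal_right.sum()
    return (w_l * f_left + w_r * f_right) / (w_l + w_r)

def disparity_statistics(disparity, num_scales=3):
    # Mean, standard deviation, and skewness of the disparity map at
    # several dyadic scales (example multi-scale statistics).
    stats, d = [], disparity.astype(np.float64)
    for _ in range(num_scales):
        mu, sigma = d.mean(), d.std() + 1e-8
        stats += [mu, sigma, ((d - mu) ** 3).mean() / sigma ** 3]
        d = d[::2, ::2]                       # simple dyadic downsampling
    return np.array(stats)

# Toy end-to-end run with random stand-in data.
cnn = QualityCNN().eval()
left, right = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
with torch.no_grad():
    _, f_l = cnn(left)
    _, f_r = cnn(right)
sal_l, sal_r = np.random.rand(64, 64), np.random.rand(64, 64)
fused = saliency_weighted_fusion(f_l[0].numpy(), f_r[0].numpy(), sal_l, sal_r)
disp = disparity_statistics(np.random.rand(64, 64))
features = np.concatenate([fused, disp])

# SVR maps the combined feature vector to an objective quality score;
# it would normally be trained on subjective (MOS) labels.
svr = SVR(kernel="rbf").fit(features[None, :], [50.0])
print("predicted quality score:", svr.predict(features[None, :])[0])

In practice the SVR would be fitted on many stereopairs with their subjective mean opinion scores rather than the single toy sample used here, and the CNN would first be trained on the patch classification task before its features are reused.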
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2018.2851255