3D No-Reference Image Quality Assessment via Transfer Learning and Saliency-Guided Feature Consolidation
Motivated by the success of convolutional neural networks (CNNs) in image-related applications, in this paper, we design an effective method for no-reference 3D image quality assessment (3D IQA) built on a CNN-based feature extraction and consolidation strategy. In the first and most vital stage, quality-aware features, which reflect the inherent quality of images, are extracted by a fine-tuned CNN model that exploits the concept of transfer learning. This fine-tuning strategy alleviates the dependence on large-scale training data that limits current deep-learning-based IQA algorithms. In the second stage, features from the left and right views are consolidated by linear weighted fusion, where the weight for each view is derived from its saliency map. In addition, statistical characteristics of the disparity map, computed at multiple scales, serve as additional features. In the final quality-mapping stage, the objective score for each stereoscopic pair is obtained by support vector regression. Experimental results on public databases show that our approach outperforms many existing no-reference and even full-reference methods.
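The abstract describes a three-stage pipeline. As an illustration of the first stage, here is a minimal PyTorch sketch of quality-aware feature extraction via transfer learning: a CNN pretrained on ImageNet is reused, its early layers are frozen, and a small head is fine-tuned on quality data. The ResNet-50 backbone, the frozen-layer split, and the 128-dimensional feature size are assumptions made for this sketch; the record does not specify the paper's actual architecture or hyperparameters.

```python
# Hypothetical stage-1 sketch: quality-aware features via transfer learning.
# Backbone (ResNet-50), frozen split, and feature_dim are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

class QualityFeatureExtractor(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep everything up to (and including) global average pooling;
        # drop the ImageNet classification head.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        # Freeze the early blocks (conv1..layer2) so only the deeper,
        # more task-specific layers adapt -- this is what reduces the
        # amount of IQA training data needed.
        for p in self.features[:6].parameters():
            p.requires_grad = False
        # Small head fine-tuned to produce quality-aware features.
        self.head = nn.Linear(backbone.fc.in_features, feature_dim)

    def forward(self, x):
        f = self.features(x).flatten(1)  # (N, 2048) pooled CNN features
        return self.head(f)              # (N, feature_dim) quality-aware features

# Example: extract features for a batch of two RGB views.
model = QualityFeatureExtractor().eval()
with torch.no_grad():
    feats = model(torch.randn(2, 3, 224, 224))  # -> shape (2, 128)
```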
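For the second and third stages, the following sketch gives one plausible reading of the abstract: per-view features are fused with weights taken from each view's saliency map, simple multi-scale disparity statistics are appended, and support vector regression maps the result to a quality score. The mean-saliency weighting rule, the (1, 2, 4) scale set, and the mean/std statistics are hypothetical choices, not the paper's exact formulas.

```python
# Hypothetical stage-2/3 sketch: saliency-guided fusion, multi-scale
# disparity statistics, and SVR quality mapping. All specific choices
# (weighting rule, scales, statistics) are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

def saliency_weighted_fusion(feat_l, feat_r, sal_l, sal_r):
    """Linearly fuse left/right features; each view's weight is its
    mean saliency, normalized so the two weights sum to one."""
    w_l, w_r = sal_l.mean(), sal_r.mean()
    total = w_l + w_r
    return (w_l / total) * feat_l + (w_r / total) * feat_r

def disparity_stats(disparity, scales=(1, 2, 4)):
    """Mean/std of the disparity map at several (crudely subsampled) scales."""
    stats = []
    for s in scales:
        d = disparity[::s, ::s]
        stats.extend([d.mean(), d.std()])
    return np.asarray(stats)

# Toy end-to-end demo with random placeholders for the per-view CNN
# features, saliency maps, and disparity map.
rng = np.random.default_rng(0)
feat_l, feat_r = rng.normal(size=128), rng.normal(size=128)
sal_l, sal_r = rng.random((64, 64)), rng.random((64, 64))
disp = rng.random((256, 256))

x = np.concatenate([saliency_weighted_fusion(feat_l, feat_r, sal_l, sal_r),
                    disparity_stats(disp)])

# Final quality mapping: an SVR trained on such vectors against subjective
# scores (MOS). Fit/predict shown schematically; no training data here.
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
# svr.fit(X_train, y_train)
# score = svr.predict(x.reshape(1, -1))
```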
Published in: | IEEE Access, 2019, Vol. 7, pp. 85286-85297 |
---|---|
Main authors: | Xu, Xiaogang; Shi, Bufan; Gu, Zijin; Deng, Ruizhe; Chen, Xiaodong; Krylov, Andrey S.; Ding, Yong |
Format: | Article |
Language: | English |
Keywords: | No-reference 3D image quality assessment; transfer learning; deep neural network |
Online access: | Full text |
container_end_page | 85297 |
---|---|
container_issue | |
container_start_page | 85286 |
container_title | IEEE access |
container_volume | 7 |
creator | Xu, Xiaogang; Shi, Bufan; Gu, Zijin; Deng, Ruizhe; Chen, Xiaodong; Krylov, Andrey S.; Ding, Yong |
description | Motivated by the success of convolutional neural networks (CNNs) in image-related applications, in this paper, we design an effective method for no-reference 3D image quality assessment (3D IQA) built on a CNN-based feature extraction and consolidation strategy. In the first and most vital stage, quality-aware features, which reflect the inherent quality of images, are extracted by a fine-tuned CNN model that exploits the concept of transfer learning. This fine-tuning strategy alleviates the dependence on large-scale training data that limits current deep-learning-based IQA algorithms. In the second stage, features from the left and right views are consolidated by linear weighted fusion, where the weight for each view is derived from its saliency map. In addition, statistical characteristics of the disparity map, computed at multiple scales, serve as additional features. In the final quality-mapping stage, the objective score for each stereoscopic pair is obtained by support vector regression. Experimental results on public databases show that our approach outperforms many existing no-reference and even full-reference methods. |
doi_str_mv | 10.1109/ACCESS.2019.2925084 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2019, Vol.7, p.85286-85297 |
issn | 2169-3536 |
language | eng |
recordid | cdi_ieee_primary_8746267 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals |
subjects | Algorithms; Artificial neural networks; Consolidation; deep neural network; Distortion; Feature extraction; Image quality; Machine learning; No-reference 3D image quality assessment; Quality assessment; Salience; Statistical analysis; Support vector machines; Three-dimensional displays; Training data; transfer learning; Two dimensional displays |
title | 3D No-Reference Image Quality Assessment via Transfer Learning and Saliency-Guided Feature Consolidation |