Deep CNN-Based Blind Image Quality Predictor
Image recognition based on convolutional neural networks (CNNs) has recently been shown to deliver the state-of-the-art performance in various areas of computer vision and image processing. Nevertheless, applying a deep CNN to no-reference image quality assessment (NR-IQA) remains a challenging task due to critical obstacles, i.e., the lack of a training database. In this paper, we propose a CNN-based NR-IQA framework that can effectively solve this problem. The proposed method-deep image quality assessor (DIQA)-separates the training of NR-IQA into two stages: 1) an objective distortion part and 2) a human visual system-related part. In the first stage, the CNN learns to predict the objective error map, and then the model learns to predict subjective score in the second stage. To complement the inaccuracy of the objective error map prediction on the homogeneous region, we also propose a reliability map. Two simple handcrafted features were additionally employed to further enhance the accuracy. In addition, we propose a way to visualize perceptual error maps to analyze what was learned by the deep CNN model. In the experiments, the DIQA yielded the state-of-the-art accuracy on the various databases.
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2019-01, Vol. 30 (1), p. 11-24 |
---|---|
Main authors: | Kim, Jongyoo; Nguyen, Anh-Duc; Lee, Sanghoon |
Format: | Article |
Language: | English |
Online access: | Order full text |
container_end_page | 24 |
---|---|
container_issue | 1 |
container_start_page | 11 |
container_title | IEEE Transactions on Neural Networks and Learning Systems |
container_volume | 30 |
creator | Kim, Jongyoo; Nguyen, Anh-Duc; Lee, Sanghoon |
description | Image recognition based on convolutional neural networks (CNNs) has recently been shown to deliver the state-of-the-art performance in various areas of computer vision and image processing. Nevertheless, applying a deep CNN to no-reference image quality assessment (NR-IQA) remains a challenging task due to critical obstacles, i.e., the lack of a training database. In this paper, we propose a CNN-based NR-IQA framework that can effectively solve this problem. The proposed method-deep image quality assessor (DIQA)-separates the training of NR-IQA into two stages: 1) an objective distortion part and 2) a human visual system-related part. In the first stage, the CNN learns to predict the objective error map, and then the model learns to predict subjective score in the second stage. To complement the inaccuracy of the objective error map prediction on the homogeneous region, we also propose a reliability map. Two simple handcrafted features were additionally employed to further enhance the accuracy. In addition, we propose a way to visualize perceptual error maps to analyze what was learned by the deep CNN model. In the experiments, the DIQA yielded the state-of-the-art accuracy on the various databases. |
doi_str_mv | 10.1109/TNNLS.2018.2829819 |
format | Article |
coden | ITNNAL |
pmid | 29994270 |
publisher | United States: IEEE |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2162-237X |
ispartof | IEEE Transactions on Neural Networks and Learning Systems, 2019-01, Vol.30 (1), p.11-24 |
issn | 2162-237X; 2162-2388 |
language | eng |
recordid | cdi_proquest_journals_2159994138 |
source | IEEE Electronic Library (IEL) |
subjects | Accuracy; Artificial neural networks; Computer vision; Convolutional neural network (CNN); deep learning; Distortion; Error analysis; Image processing; Image quality; image quality assessment (IQA); Machine learning; Mathematical models; Neural networks; no-reference IQA (NR-IQA); Object recognition; Quality assessment; Quality control; Reliability; State of the art; Training; Visual system; Visualization |
title | Deep CNN-Based Blind Image Quality Predictor |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T09%3A24%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Deep%20CNN-Based%20Blind%20Image%20Quality%20Predictor&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Kim,%20Jongyoo&rft.date=2019-01&rft.volume=30&rft.issue=1&rft.spage=11&rft.epage=24&rft.pages=11-24&rft.issn=2162-237X&rft.eissn=2162-2388&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2018.2829819&rft_dat=%3Cproquest_RIE%3E2159994138%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2159994138&rft_id=info:pmid/29994270&rft_ieee_id=8383698&rfr_iscdi=true |
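The abstract describes two stage-1 ingredients: an objective error map that the CNN learns to predict, and a reliability map that down-weights homogeneous regions where that prediction is unreliable. The sketch below is a minimal NumPy illustration of this idea, not the paper's implementation: the error exponent, the gradient-based activity measure, and the constant `alpha` are all illustrative assumptions rather than DIQA's exact definitions.

```python
import numpy as np

def objective_error_map(ref, dist, p=0.2):
    """Stage-1 regression target: per-pixel absolute error between the
    reference and distorted images, compressed with a small exponent so
    that large errors do not dominate (p=0.2 is illustrative)."""
    return np.abs(ref - dist) ** p

def reliability_map(dist, alpha=4.0):
    """Weight in [0, 1) that is near zero in homogeneous regions, where
    error-map prediction is unreliable. Local activity is measured here
    with the gradient magnitude (an assumption; the paper derives its
    reliability map differently)."""
    gy, gx = np.gradient(dist)
    activity = np.sqrt(gx ** 2 + gy ** 2)
    # map activity in [0, inf) onto a weight in [0, 1)
    return 2.0 / (1.0 + np.exp(-alpha * activity)) - 1.0

def weighted_map_loss(pred, target, reliability):
    """Stage-1 loss: squared error between the predicted and objective
    error maps, weighted per pixel by the reliability map."""
    w = reliability / (reliability.mean() + 1e-8)  # keep average weight near 1
    return float(np.mean(w * (pred - target) ** 2))

# Toy usage: a "distorted" image as reference plus mild Gaussian noise.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
dist = np.clip(ref + 0.1 * rng.standard_normal((32, 32)), 0.0, 1.0)
err = objective_error_map(ref, dist)   # stage-1 target
rel = reliability_map(dist)            # down-weights flat regions
loss = weighted_map_loss(np.zeros_like(err), err, rel)
```

In stage 2, the paper then freezes this learned representation and trains a regressor onto subjective scores; that part is omitted here since it requires a subjective-score database.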