Image Manipulation Quality Assessment
Image quality assessment (IQA) and its computational models play a vital role in modern computer vision applications. Research has traditionally focused on signal distortions arising during image compression and transmission, and their impact on perceived image quality. However, little attention has been paid to image manipulation that alters an image using various filters. With the prevalence of image manipulation in real-life scenarios, it is critical to understand how humans perceive filter-altered images and to develop reliable IQA models capable of automatically assessing the quality of filtered images. In this paper, we build a new IQA database for filter-altered images, comprising 360 images manipulated by various filters. To ensure the subjective IQA faithfully reflects human visual perception, we conduct a fully controlled psychovisual experiment. Building upon the ground truth, we propose an innovative deep learning-based no-reference IQA (NR-IQA) model named IMQA that can accurately predict the perceived quality of filter-altered images. The model constructs an image filtering-aware module to learn discriminative features for filter-altered images and fuses these features with the representations generated by an image quality-aware module. Experimental results demonstrate the superior performance of the proposed IMQA model.
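The abstract describes a two-branch design: a filtering-aware module learns features that discriminate between filter manipulations, and these features are fused with representations from a quality-aware module to predict perceived quality. The sketch below illustrates such a two-branch NR-IQA model; the ResNet-18 backbones, feature dimensions, auxiliary filter-classification head, and concatenation-based fusion are illustrative assumptions only, as this record does not specify the actual IMQA architecture.

```python
# Illustrative sketch (not the authors' code): a two-branch no-reference IQA
# model in the spirit of the abstract's description. One branch learns
# filter-aware features (with an auxiliary filter-classification head), the
# other learns quality-aware features; the two representations are fused and
# regressed to a single quality score.
import torch
import torch.nn as nn
from torchvision import models


class TwoBranchNRIQA(nn.Module):
    def __init__(self, num_filter_classes: int = 10):  # number of filter types is assumed
        super().__init__()
        # Filtering-aware branch: its auxiliary head predicts which filter
        # altered the image, pushing the features to be filter-sensitive.
        self.filter_backbone = models.resnet18(weights=None)
        self.filter_backbone.fc = nn.Identity()
        self.filter_head = nn.Linear(512, num_filter_classes)

        # Quality-aware branch: a generic perceptual-quality representation.
        self.quality_backbone = models.resnet18(weights=None)
        self.quality_backbone.fc = nn.Identity()

        # Fuse the two 512-d representations and regress a quality score.
        self.regressor = nn.Sequential(
            nn.Linear(1024, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, x: torch.Tensor):
        f_filter = self.filter_backbone(x)       # (B, 512) filter-aware features
        f_quality = self.quality_backbone(x)     # (B, 512) quality-aware features
        fused = torch.cat([f_filter, f_quality], dim=1)
        score = self.regressor(fused).squeeze(1)     # predicted perceptual quality
        filter_logits = self.filter_head(f_filter)   # auxiliary filter classification
        return score, filter_logits


if __name__ == "__main__":
    model = TwoBranchNRIQA()
    dummy = torch.randn(2, 3, 224, 224)
    score, logits = model(dummy)
    print(score.shape, logits.shape)  # torch.Size([2]) torch.Size([2, 10])
```

In this sketch, the auxiliary classification loss would encourage the first branch to become filter-aware while the fused features drive the quality regression; the paper's actual training objectives and fusion mechanism are not given in this record.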
Saved in:
Published in: | IEEE Transactions on Circuits and Systems for Video Technology, 2024-11, p. 1-1 |
---|---|
Main authors: | Wu, Xinbo; Lou, Jianxun; Wu, Yingying; Liu, Wanan; Rosin, Paul L.; Colombo, Gualtiero; Allen, Stuart; Whitaker, Roger; Liu, Hantao |
Format: | Article |
Language: | English |
Subjects: | Circuits and systems; Computational modeling; deep learning; Distortion; filter-altered; Filtering algorithms; Image color analysis; image manipulation; Image quality; Image quality assessment; Monitoring; perception; Quality assessment; Visualization |
Online access: | Order full text |
container_end_page | 1 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on circuits and systems for video technology |
container_volume | |
creator | Wu, Xinbo; Lou, Jianxun; Wu, Yingying; Liu, Wanan; Rosin, Paul L.; Colombo, Gualtiero; Allen, Stuart; Whitaker, Roger; Liu, Hantao |
description | Image quality assessment (IQA) and its computational models play a vital role in modern computer vision applications. Research has traditionally focused on signal distortions arising during image compression and transmission, and their impact on perceived image quality. However, little attention has been paid to image manipulation that alters an image using various filters. With the prevalence of image manipulation in real-life scenarios, it is critical to understand how humans perceive filter-altered images and to develop reliable IQA models capable of automatically assessing the quality of filtered images. In this paper, we build a new IQA database for filter-altered images, comprising 360 images manipulated by various filters. To ensure the subjective IQA faithfully reflects human visual perception, we conduct a fully controlled psychovisual experiment. Building upon the ground truth, we propose an innovative deep learning-based no-reference IQA (NR-IQA) model named IMQA that can accurately predict the perceived quality of filter-altered images. The model constructs an image filtering-aware module to learn discriminative features for filter-altered images and fuses these features with the representations generated by an image quality-aware module. Experimental results demonstrate the superior performance of the proposed IMQA model. |
doi_str_mv | 10.1109/TCSVT.2024.3504854 |
format | Article |
identifier | ISSN: 1051-8215 |
ispartof | IEEE transactions on circuits and systems for video technology, 2024-11, p.1-1 |
issn | 1051-8215 (print); 1558-2205 (electronic) |
language | eng |
recordid | cdi_ieee_primary_10764798 |
source | IEEE Electronic Library (IEL) |
subjects | Circuits and systems; Computational modeling; deep learning; Distortion; filter-altered; Filtering algorithms; Image color analysis; image manipulation; Image quality; Image quality assessment; Monitoring; perception; Quality assessment; Visualization |
title | Image Manipulation Quality Assessment |