Saliency Detection for Stereoscopic Images

Many saliency detection models for 2D images have been proposed for various multimedia processing applications over the past decades. Currently, emerging stereoscopic display applications require new saliency detection models for salient region extraction. Unlike saliency detection for 2D images, saliency detection for stereoscopic images must also take the depth feature into account. In this paper, we propose a novel stereoscopic saliency detection framework based on the feature contrast of color, luminance, texture, and depth. These four types of features are extracted from discrete cosine transform (DCT) coefficients for feature contrast calculation, and a Gaussian model of the spatial distance between image patches accounts for both local and global contrast. A new fusion method then combines the feature maps into the final saliency map for stereoscopic images. In addition, we adopt the center bias factor and human visual acuity, two important characteristics of the human visual system, to enhance the final saliency map. Experimental results on eye-tracking databases show the superior performance of the proposed model over existing methods.
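
The abstract outlines the core computation: per-patch features derived from DCT coefficients, pairwise feature contrast between patches, and a Gaussian model of inter-patch spatial distance that balances local and global contrast. The sketch below illustrates that general idea on a single 2D map; it is not the authors' implementation, and the patch size, the DC/AC-energy features used as luminance/texture stand-ins, and the Gaussian width sigma are illustrative assumptions. In the full framework, such contrast maps for color, luminance, texture, and depth would additionally be fused and modulated by center bias and visual acuity, which this sketch omits.

```python
# Minimal sketch (not the paper's exact pipeline): patch-wise feature
# contrast weighted by a Gaussian of the spatial distance between patches.
# Patch size, sigma, and the DC/AC-energy features are illustrative choices.
import numpy as np
from scipy.fft import dctn


def patch_features(img, patch=16):
    """Split a 2D map into patches; use each patch's 2D-DCT DC coefficient
    and AC energy as crude luminance- and texture-like features."""
    h, w = img.shape
    H, W = h // patch, w // patch          # leftover border pixels ignored
    feats, centers = [], []
    for i in range(H):
        for j in range(W):
            block = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            coeffs = dctn(block, norm='ortho')
            dc = coeffs[0, 0]                                  # luminance-like
            ac = np.sqrt(np.maximum(np.sum(coeffs ** 2) - dc ** 2, 0.0))  # texture-like
            feats.append([dc, ac])
            centers.append([(i + 0.5) * patch, (j + 0.5) * patch])
    return np.array(feats), np.array(centers), (H, W)


def saliency_map(img, patch=16, sigma=0.25):
    """Saliency of a patch = sum of its feature differences to all other
    patches, weighted by a Gaussian of the normalized spatial distance."""
    feats, centers, (H, W) = patch_features(img, patch)
    diag = np.hypot(*img.shape)                      # normalize distances by image diagonal
    dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1) / diag
    weight = np.exp(-dist ** 2 / (2 * sigma ** 2))   # Gaussian spatial model
    fdiff = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    sal = (weight * fdiff).sum(axis=1)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)  # scale to [0, 1]
    return sal.reshape(H, W)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = rng.random((128, 128))          # toy grayscale map
    print(saliency_map(toy).shape)        # (8, 8) patch-level saliency map
```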

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2014-06, Vol. 23 (6), pp. 2625-2636
Main authors: Yuming Fang, Junle Wang, Manish Narwaria, Patrick Le Callet, Weisi Lin
Format: Article
Language: English
DOI: 10.1109/TIP.2014.2305100
ISSN: 1057-7149
EISSN: 1941-0042
PMID: 24832595
Publisher: IEEE, New York, NY
Source: IEEE Electronic Library (IEL)
Subjects:
3D image
Algorithms
Applied sciences
Artificial Intelligence
Computational modeling
Computer Science
Detection, estimation, filtering, equalization, prediction
Engineering Sciences
Exact sciences and technology
Feature extraction
Human
human visual acuity
Image color analysis
Image detection
Image Enhancement - methods
Image Interpretation, Computer-Assisted - methods
Image processing
Imaging, Three-Dimensional - methods
Information, signal and communications theory
Luminance
Mathematical models
Pattern Recognition, Automated - methods
Photogrammetry - methods
Reproducibility of Results
Sensitivity and Specificity
Signal and communications theory
Signal and Image Processing
Signal processing
Signal, noise
Solid modeling
Stereo image processing
Stereoscopic
Stereoscopic image
stereoscopic saliency detection
Subtraction Technique
Surface layer
Telecommunications and information theory
Texture
Three-dimensional displays
Two dimensional
visual attention
Visualization