Depth Map Estimation Using Defocus and Motion Cues

Significant recent developments in 3D display technology have focused on techniques for converting 2D media into 3D. A depth map is an integral part of 2D-to-3D conversion. Combining multiple depth cues results in a more accurate depth map, as the errors caused by one depth cue, as well as its absence, are compensated for by the other depth cues. In this paper, we present a novel framework to generate a more accurate depth map for video using defocus and motion cues. The moving objects present in the scene are the source of errors in both defocus- and motion-based depth map estimation. The proposed method rectifies these errors in the depth map by integrating defocus blur and motion cues. In addition, it also corrects the errors in other parts of the depth map caused by inaccurate estimation of defocus blur and motion. Since the proposed integration approach relies on the characteristics of the point spread functions of defocus and motion blur, along with their relations to camera parameters, it is more accurate and reliable.

Bibliographic Details

Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2019-05, Vol. 29 (5), pp. 1365-1379
Main authors: Kumar, Himanshu; Yadav, Ajeet Singh; Gupta, Sumana; Venkatesh, K. S.
Format: Article
Language: English
Online access: Order full text
DOI: 10.1109/TCSVT.2018.2832086
ISSN: 1051-8215
EISSN: 1558-2205
Source: IEEE Electronic Library (IEL)
Subjects: 2D-to-3D; 3DTV; Cameras; Cues; depth for video; Depth from defocus; depth from motion; Display devices; Drivers licenses; Estimation; Image edge detection; integration of defocus and motion; monocular depth; motion and defocus; Motion segmentation; Object motion; Point spread functions; PSF based integration; Reliability; Three-dimensional displays; Two dimensional displays