Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring

The classical approach to depth from defocus (DFD) uses lenses with circular apertures for image capturing. We show in this paper that the use of a circular aperture severely restricts the accuracy of DFD. We derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. This criterion is optimized using a genetic algorithm and gradient descent search to arrive at a pair of high resolution apertures. These two coded apertures are found to complement each other in the scene frequencies they preserve. This property enables them to not only recover depth with greater fidelity but also obtain a high quality all-focused image from the two captured images. Extensive simulations as well as experiments on a variety of real scenes demonstrate the benefits of using the coded apertures over conventional circular apertures.
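For a concrete picture of how a coded aperture pair can drive DFD, the following is a minimal sketch, not the authors' actual algorithm: each candidate depth implies a blur scale, both captured images are Wiener-deconvolved with that scale of their respective aperture's PSF, and the depth whose two deblurred estimates agree best is selected. All names (wiener_deconv, estimate_depth_index, the noise ratio nsr) are illustrative assumptions, and the PSFs are assumed centered and zero-padded to the image size.

import numpy as np

def wiener_deconv(img, psf, nsr=1e-2):
    # Wiener deconvolution in the frequency domain.
    # img, psf: 2-D float arrays of the same shape; psf is assumed
    # centered and zero-padded to the image size. nsr is an assumed
    # scalar noise-to-signal power ratio standing in for the full
    # noise statistics a complete implementation would use.
    F = np.fft.fft2(img)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # regularized inverse filter
    return np.real(np.fft.ifft2(G * F))

def estimate_depth_index(img1, img2, psf_pairs):
    # psf_pairs[d] = (psf_a, psf_b): the two apertures' PSFs at the
    # blur scale implied by candidate depth d. Returns the index of
    # the hypothesis under which the two deblurred images are most
    # consistent (applied per patch, this yields a depth map).
    errors = []
    for psf_a, psf_b in psf_pairs:
        est_a = wiener_deconv(img1, psf_a)
        est_b = wiener_deconv(img2, psf_b)
        errors.append(np.mean((est_a - est_b) ** 2))
    return int(np.argmin(errors))

At the correct depth, the two complementary apertures jointly cover the scene's frequency spectrum, so the two deblurred estimates both approximate the all-focused image; combining them (e.g., by joint deconvolution) then yields the high quality deblurred result the abstract describes.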


Bibliographic Details
Published in: International Journal of Computer Vision, 2011-05, Vol. 93 (1), p. 53-72
Main Authors: Zhou, Changyin; Lin, Stephen; Nayar, Shree K.
Format: Article
Language: English
ISSN: 0920-5691
EISSN: 1573-1405
DOI: 10.1007/s11263-010-0409-8
Publisher: Springer US, Boston
Source: SpringerLink Journals
Online Access: Full text
Subjects:
Algorithms
Aperture
Apertures
Applied sciences
Artificial Intelligence
Cameras
Complement
Computer Imaging
Computer Science
Computer science; control theory; systems
Computer simulation
Computer vision
Criteria
Exact sciences and technology
Genetic algorithms
Image Processing and Computer Vision
Noise
Optimization
Pattern Recognition
Pattern Recognition and Graphics
Pattern recognition. Digital image processing. Computational geometry
Preserves
Searching
Simulation
Software
Vision