Multi-channel residual network model for accurate estimation of spatially-varying and depth-dependent defocus kernels

Digital projectors have been increasingly utilized in various commercial and scientific applications. However, they are prone to the out-of-focus blurring problem since their depth of field is typically limited. In this paper, we explore the feasibility of utilizing a deep learning-based approach to analyze the spatially-varying and depth-dependent defocus properties of digital projectors. A multimodal displaying/imaging system is built for capturing images projected at various depths. Based on the constructed dataset containing well-aligned in-focus, out-of-focus, and depth images, we propose a novel multi-channel residual deep network model to learn the end-to-end mapping function between the in-focus and out-of-focus image patches captured at different spatial locations and depths. To the best of our knowledge, it is the first research work to reveal that the complex spatially-varying and depth-dependent blurring effects can be accurately learned from a number of real-captured image pairs instead of being hand-crafted as before. Experimental results demonstrate that our proposed deep learning-based method significantly outperforms the state-of-the-art defocus kernel estimation techniques and thus leads to better out-of-focus compensation for extending the dynamic ranges of digital projectors.
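At the heart of the abstract is a defocus kernel k that varies with image position (x, y) and scene depth d: the observed out-of-focus image B relates to the in-focus image S through a spatially-varying convolution, B(x, y) = Σ_{u,v} k_{x,y,d}(u, v) · S(x − u, y − v). As a rough illustration of the multi-channel residual network described above, the following sketch (PyTorch; our choice of framework, and all layer counts, widths, and the input encoding are assumptions rather than the authors' configuration) maps an in-focus patch, concatenated with per-pixel depth and normalized spatial-coordinate channels, to the corresponding out-of-focus patch:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        # Identity skip connection: the block learns a residual correction.
        return x + self.body(x)

class MultiChannelResNet(nn.Module):
    # Input: RGB patch (3) + depth map (1) + normalized (x, y) coordinates (2) = 6 channels.
    # Output: the predicted out-of-focus RGB patch.
    def __init__(self, in_channels: int = 6, features: int = 64, num_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(in_channels, features, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(features) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(features, 3, 3, padding=1)

    def forward(self, patch, depth, coords):
        # patch: (N, 3, H, W); depth: (N, 1, H, W); coords: (N, 2, H, W)
        x = torch.cat([patch, depth, coords], dim=1)
        return self.tail(self.blocks(self.head(x)))

# Toy usage: random tensors stand in for real-captured, well-aligned patches.
net = MultiChannelResNet()
patch = torch.rand(4, 3, 32, 32)
depth = torch.rand(4, 1, 32, 32)
coords = torch.rand(4, 2, 32, 32)
blurred = net(patch, depth, coords)  # (4, 3, 32, 32)

In training, such a network would be fitted with a pixel-wise loss (e.g., L1) between its predictions and the real-captured out-of-focus patches; the learned forward blur model can then be pre-compensated to sharpen the projected content.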

Saved in:
Bibliographic details
Published in: Optics Express, 2020-01, Vol. 28 (2), p. 2263-2275
Main authors: Cao, Yanpeng; Ye, Zhangyu; He, Zewei; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Yang, Michael Ying
Format: Article
Language: English
Online access: Full text
DOI: 10.1364/OE.383127
ISSN: 1094-4087
EISSN: 1094-4087
PMID: 32121920