Learning Depth from Focus in the Wild
For better photography, most recent commercial cameras, including smartphones, have either adopted large-aperture lenses to collect more light or use a burst mode to take multiple images within a short time. These features lead us to examine depth from focus/defocus. In this work, we present...
Saved in:

Main authors: | Won, Changyeon; Jeon, Hae-Gon |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
creator | Won, Changyeon; Jeon, Hae-Gon |
---|---|
description | For better photography, most recent commercial cameras, including
smartphones, have either adopted large-aperture lenses to collect more light or
use a burst mode to take multiple images within a short time. These features
lead us to examine depth from focus/defocus.
In this work, we present a convolutional neural network-based depth estimation
method for single focal stacks. Our method differs from relevant
state-of-the-art works in three unique ways. First, it allows depth maps to be
inferred in an end-to-end manner even with image alignment. Second, we propose
a sharp region detection module to reduce blur ambiguities under subtle focus
changes and in weakly textured regions. Third, we design an effective
downsampling module to ease the flow of focal information during feature
extraction. In addition, to improve the generalization of the proposed network,
we develop a simulator that realistically reproduces the characteristics of
commercial cameras, such as changes in field of view, focal length and
principal point. By effectively incorporating these three features, our network
achieves the top rank in the DDFF 12-Scene benchmark on most metrics. We also
demonstrate the effectiveness of the proposed method through various
quantitative evaluations and on real-world images taken with various
off-the-shelf cameras, compared with state-of-the-art methods. Our source code
is publicly available at https://github.com/wcy199705/DfFintheWild. |
doi | 10.48550/arxiv.2207.09658 |
format | Article |
date | 2022-07-20 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
language | eng |
recordid | cdi_arxiv_primary_2207_09658 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | Learning Depth from Focus in the Wild |
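For readers new to the topic, the principle the abstract builds on can be illustrated with a small classical baseline: score per-pixel sharpness in every slice of the focal stack, then take, at each pixel, the focus setting where sharpness peaks. The sketch below shows only that classical baseline, not the authors' end-to-end network; the function name `depth_from_focus`, the Laplacian focus measure and the window size are illustrative choices, and it assumes a focal stack that is already aligned, which real handheld captures (the paper's setting) are not.

```python
# Minimal classical depth-from-focus baseline (NOT the paper's method).
# Assumes a pre-aligned focal stack of grayscale float images; the index
# of the sharpest slice at each pixel serves as a coarse depth proxy.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack: np.ndarray, window: int = 9) -> np.ndarray:
    """stack: (num_slices, H, W) float array, one slice per focus setting.
    Returns an (H, W) map of the sharpest slice index at each pixel."""
    # Focus measure: locally averaged squared Laplacian response.
    sharpness = np.stack([uniform_filter(laplace(s) ** 2, size=window)
                          for s in stack])
    # An in-focus pixel responds most strongly at its true focus setting.
    return np.argmax(sharpness, axis=0)
```

In weakly textured regions the per-pixel sharpness curve is nearly flat, so the argmax is unreliable; that is precisely the blur ambiguity the paper's sharp region detection module is designed to address.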