Single-Shot Metric Depth from Focused Plenoptic Cameras
Metric depth estimation from visual sensors is crucial for robots to perceive, navigate, and interact with their environment. Traditional range imaging setups, such as stereo or structured light cameras, face challenges including calibration, occlusions, and hardware demands, with accuracy limited by the baseline between cameras. Single- and multi-view monocular depth offers a more compact alternative, but is constrained by the unobservability of metric scale. Light field imaging provides a promising solution for estimating metric depth with a single device, thanks to its unique lens configuration. However, its application to single-view dense metric depth remains under-addressed, mainly due to the technology's high cost, the lack of public benchmarks, and proprietary geometric models and software. Our work explores the potential of focused plenoptic cameras for dense metric depth. We propose a novel pipeline that predicts metric depth from a single plenoptic camera shot: it first generates a sparse metric point cloud using machine learning, which is then used to scale and align a dense relative depth map regressed by a foundation depth model, yielding dense metric depth. To validate the pipeline, we curated the Light Field & Stereo Image Dataset (LFS) of real-world light field images with stereo depth labels, filling a gap in existing resources. Experimental results show that our pipeline produces accurate metric depth predictions, laying a solid groundwork for future research in this field.
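The final step described in the abstract, using sparse metric anchors to scale and align a dense relative depth map, can be illustrated with a short sketch. This is not the authors' code: it assumes the common least-squares scale-and-shift alignment model, and the function name, array shapes, and synthetic data are all hypothetical.

```python
# Minimal sketch of scale-and-shift alignment of a relative depth map to
# sparse metric depths. Assumption: the paper's "scale and align" step is
# modeled here as fitting a single global scale s and shift t by least squares.
import numpy as np

def align_relative_depth(rel_depth: np.ndarray,
                         sparse_metric: np.ndarray,
                         valid_mask: np.ndarray) -> np.ndarray:
    """Return a dense metric depth map from a relative one.

    rel_depth     -- dense relative depth map (H x W), foundation-model output
    sparse_metric -- sparse metric depths (H x W), arbitrary where invalid
    valid_mask    -- boolean mask (H x W), True where sparse_metric is valid
    """
    d = rel_depth[valid_mask]      # relative depths at the sparse anchors
    z = sparse_metric[valid_mask]  # corresponding metric depths
    # Solve min_{s,t} || s * d + t - z ||^2 as a 2-parameter linear system.
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, z, rcond=None)
    return s * rel_depth + t

# Usage with synthetic stand-in data (not from the paper's dataset):
H, W = 4, 5
rng = np.random.default_rng(0)
rel = rng.uniform(0.1, 1.0, (H, W))
mask = np.zeros((H, W), dtype=bool)
mask.ravel()[::3] = True                        # sparse anchor pixels
metric = np.where(mask, 2.5 * rel + 0.3, 0.0)   # synthetic metric labels
dense_metric = align_relative_depth(rel, metric, mask)
# Recovers scale 2.5 and shift 0.3, so dense_metric is metric everywhere.
```

With noise-free anchors the fit is exact; in practice the sparse point cloud would be noisy, and a robust variant (e.g., RANSAC over the same two parameters) could be substituted.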
Published in: | arXiv.org, 2024-12 |
---|---|
Authors: | Lasheras-Hernandez, Blanca; Strobl, Klaus H; Izquierdo, Sergio; Bodenmüller, Tim; Triebel, Rudolph; Civera, Javier |
Format: | Article |
Language: | English |
Subjects: | Cameras; Estimation; Machine learning |
EISSN: | 2331-8422 |
Publisher: | Cornell University Library, arXiv.org (Ithaca) |
Rights: | Published under CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/) |
Online access: | Full text |