Single-photon 3D imaging with deep sensor fusion


Bibliographic details
Published in: ACM Transactions on Graphics, 2018-08, Vol. 37 (4), p. 1-12
Authors: Lindell, David B.; O'Toole, Matthew; Wetzstein, Gordon
Format: Article
Language: English
Online access: Full text
Description: Sensors that capture 3D scene information provide useful data for tasks in vehicle navigation, gesture recognition, human pose estimation, and geometric reconstruction. Active-illumination time-of-flight sensors in particular have become widely used to estimate a 3D representation of a scene. However, the maximum range, the density of acquired spatial samples, and the overall acquisition time of these sensors are fundamentally limited by the minimum signal required to estimate depth reliably. In this paper, we propose a data-driven method for photon-efficient 3D imaging that leverages sensor fusion and computational reconstruction to rapidly and robustly estimate a dense depth map from low photon counts. Our sensor fusion approach uses measurements of single-photon arrival times from a low-resolution single-photon detector array and an intensity image from a conventional high-resolution camera. Using a multi-scale deep convolutional network, we jointly process the raw measurements from both sensors and output a high-resolution depth map. To demonstrate the efficacy of our approach, we implement a hardware prototype and show results using captured data. At low signal-to-background levels, our depth reconstruction algorithm with sensor fusion outperforms other methods for depth estimation from noisy measurements of photon arrival times.
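The measurement model the abstract describes can be illustrated with a minimal sketch: a single-photon detector records a histogram of photon arrival times, and depth follows from the round-trip time of the laser pulse. This is not the paper's method (which fuses the histogram with an intensity image via a multi-scale CNN); it is a hedged toy simulation in which the bin width, pulse width, and signal/background photon counts are all assumed values, and depth is recovered by simple smoothing and argmax.

```python
import numpy as np

C = 3e8          # speed of light (m/s)
BIN_W = 100e-12  # 100 ps time bins (assumed)
N_BINS = 1024

def simulate_histogram(depth_m, signal_photons, bg_photons, rng):
    """Poisson photon-count histogram: a Gaussian laser pulse centered at
    the true depth's time bin, plus uniform background/dark counts."""
    t_bin = int(round(2 * depth_m / C / BIN_W))  # round-trip time -> bin index
    bins = np.arange(N_BINS)
    pulse = np.exp(-0.5 * ((bins - t_bin) / 2.0) ** 2)  # ~2-bin pulse sigma
    rate = signal_photons * pulse / pulse.sum() + bg_photons / N_BINS
    return rng.poisson(rate)

def estimate_depth(hist):
    """Smooth with a small boxcar, take the argmax bin, convert to depth."""
    smoothed = np.convolve(hist, np.ones(5), mode="same")
    t = smoothed.argmax() * BIN_W
    return C * t / 2

rng = np.random.default_rng(0)
hist = simulate_histogram(3.0, signal_photons=200, bg_photons=50, rng=rng)
print(estimate_depth(hist))  # close to the true 3.0 m
```

Lowering `signal_photons` toward the background level makes the argmax increasingly unreliable, which is exactly the low-photon-count regime where the paper's learned sensor-fusion reconstruction is claimed to outperform such classical estimators.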
DOI: 10.1145/3197517.3201316
ISSN: 0730-0301
EISSN: 1557-7368
Source: ACM Digital Library Complete