Depth Restoration in Under-Display Time-of-Flight Imaging
Under-display imaging has recently received considerable attention in both academia and industry. As a variation of this technique, under-display ToF (UD-ToF) cameras enable depth sensing for full-screen devices. However, it also introduces image blurring and reduces both signal-to-noise ratio and ranging accuracy. To address these issues, we propose a cascaded deep network to improve the quality of UD-ToF depth maps. The network comprises two subnets: the first uses a complex-valued network in the raw domain to perform denoising, deblurring and raw-measurement enhancement jointly, while the second refines depth maps in the depth domain with the proposed multi-scale depth enhancement block (MSDEB). To enable training, we build a data acquisition device and construct a real UD-ToF dataset of paired ToF raw data. In addition, we build a large-scale synthetic UD-ToF dataset through noise analysis. Quantitative and qualitative evaluations on public datasets and ours demonstrate that the presented network outperforms state-of-the-art algorithms and can further promote full-screen devices in practical applications.
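For context on the "raw domain" versus "depth domain" distinction in the abstract, the sketch below shows how a conventional four-phase continuous-wave ToF pipeline turns raw correlation measurements into a depth map — the two kinds of data the paper's first and second subnets operate on. This is a generic textbook pipeline, not the paper's method; the function name, the `B + A*cos(phi - k*pi/2)` sampling convention, and the modulation frequency in the example are assumptions for illustration.

```python
import numpy as np

C = 3e8  # speed of light in m/s

def tof_depth_from_raw(q0, q1, q2, q3, f_mod):
    """Recover depth, amplitude, and phase from four phase-shifted raw
    correlation measurements of a continuous-wave ToF sensor.

    Assumes q_k = B + A * cos(phi - k*pi/2), where phi encodes the
    round-trip travel time and B is a constant offset.
    """
    i = q0 - q2  # in-phase component: 2A * cos(phi), offset cancels
    q = q1 - q3  # quadrature component: 2A * sin(phi)
    phase = np.mod(np.arctan2(q, i), 2 * np.pi)  # wrapped phase in [0, 2*pi)
    amplitude = 0.5 * np.sqrt(i**2 + q**2)
    depth = C * phase / (4 * np.pi * f_mod)      # unambiguous up to C/(2*f_mod)
    return depth, amplitude, phase

# Example: synthesize raw samples for a 1.5 m target at 20 MHz modulation.
f_mod = 2e7
phi = 4 * np.pi * f_mod * 1.5 / C
raw = [100 + 40 * np.cos(phi - k * np.pi / 2) for k in range(4)]
depth, amplitude, _ = tof_depth_from_raw(*raw, f_mod)
```

Degradations from the display panel corrupt the four `q_k` samples before this conversion, which is why the paper restores the raw measurements first and only then refines the resulting depth map.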
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023-05, Vol. 45 (5), p. 5668-5683
Main authors: Qiao, Xin; Ge, Chenyang; Deng, Pengchao; Wei, Hao; Poggi, Matteo; Mattoccia, Stefano
Format: Article
Language: English
Online access: Order full text
| Field | Value |
|---|---|
| container_title | IEEE transactions on pattern analysis and machine intelligence |
| container_volume | 45 |
| container_issue | 5 |
| container_start_page | 5668 |
| container_end_page | 5683 |
| creator | Qiao, Xin; Ge, Chenyang; Deng, Pengchao; Wei, Hao; Poggi, Matteo; Mattoccia, Stefano |
| description | Under-display imaging has recently received considerable attention in both academia and industry. As a variation of this technique, under-display ToF (UD-ToF) cameras enable depth sensing for full-screen devices. However, it also introduces image blurring and reduces both signal-to-noise ratio and ranging accuracy. To address these issues, we propose a cascaded deep network to improve the quality of UD-ToF depth maps. The network comprises two subnets: the first uses a complex-valued network in the raw domain to perform denoising, deblurring and raw-measurement enhancement jointly, while the second refines depth maps in the depth domain with the proposed multi-scale depth enhancement block (MSDEB). To enable training, we build a data acquisition device and construct a real UD-ToF dataset of paired ToF raw data. In addition, we build a large-scale synthetic UD-ToF dataset through noise analysis. Quantitative and qualitative evaluations on public datasets and ours demonstrate that the presented network outperforms state-of-the-art algorithms and can further promote full-screen devices in practical applications. |
| doi_str_mv | 10.1109/TPAMI.2022.3209905 |
| format | Article |
| Field | Value |
|---|---|
| title | Depth Restoration in Under-Display Time-of-Flight Imaging |
| ispartof | IEEE transactions on pattern analysis and machine intelligence, 2023-05, Vol.45 (5), p.5668-5683 |
| identifier | ISSN: 0162-8828; EISSN: 1939-3539; EISSN: 2160-9292; DOI: 10.1109/TPAMI.2022.3209905; PMID: 36155477 |
| language | eng |
| source | IEEE Electronic Library (IEL) |
| subjects | Algorithms; Blurring; Cameras; CNN; Data acquisition; Datasets; denoising; depth restoration; Domains; Image restoration; Imaging; Noise measurement; Noise reduction; Qualitative analysis; Sensors; Signal to noise ratio; Task analysis; Time-of-flight; under display |