Graph-Based Depth Denoising & Dequantization for Point Cloud Enhancement

A 3D point cloud is typically constructed from depth measurements acquired by sensors at one or more viewpoints. The measurements suffer from both quantization and noise corruption. To improve quality, previous works denoise a point cloud a posteriori after projecting the imperfect depth data onto 3D space. Instead, we enhance depth measurements directly on the sensed images a priori, before synthesizing a 3D point cloud. By enhancing near the physical sensing process, we tailor our optimization to our depth formation model before subsequent processing steps that obscure measurement errors. Specifically, we model depth formation as a combined process of signal-dependent noise addition and non-uniform log-based quantization. The designed model is validated (with parameters fitted) using collected empirical data from a representative depth sensor. To enhance each pixel row in a depth image, we first encode intra-view similarities between available row pixels as edge weights via feature graph learning. We next establish inter-view similarities with another rectified depth image via viewpoint mapping and sparse linear interpolation. This leads to a maximum a posteriori (MAP) graph filtering objective that is convex and differentiable. We minimize the objective efficiently using accelerated gradient descent (AGD), where the optimal step size is approximated via Gershgorin circle theorem (GCT). Experiments show that our method significantly outperformed recent point cloud denoising schemes and state-of-the-art image denoising schemes in two established point cloud quality metrics.
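As a worked illustration of the MAP formulation described in the abstract (the notation here is assumed for illustration and may differ from the paper's exact model: x is the unknown depth row, y the noisy quantized measurements, sigma_i the per-pixel signal-dependent noise standard deviations, w_ij the learned edge weights, L the resulting graph Laplacian, and mu the prior weight), such a graph-filtering objective typically takes the form

```latex
\min_{\mathbf{x}} \;
  \sum_{i} \frac{(y_i - x_i)^2}{2\sigma_i^2}
  \;+\;
  \mu \sum_{(i,j)\in\mathcal{E}} w_{ij}\,(x_i - x_j)^2
  \;=\;
  \tfrac{1}{2}\,(\mathbf{x}-\mathbf{y})^{\top}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\mathbf{y})
  \;+\;
  \mu\,\mathbf{x}^{\top}\mathbf{L}\,\mathbf{x},
  \qquad \boldsymbol{\Sigma} = \mathrm{diag}(\sigma_1^2,\dots,\sigma_N^2),
```

which is a sum of two quadratics and hence convex and differentiable, as the abstract states. For the accelerated gradient descent (AGD) step, the Hessian of this quadratic objective is the constant matrix Sigma^{-1} + 2*mu*L, and the Gershgorin circle theorem (GCT) upper-bounds its largest eigenvalue from row sums alone, yielding a cheap, safe step size. The sketch below is a hypothetical illustration of that AGD + GCT combination, not the authors' implementation; the function names gershgorin_step_size and agd_map_denoise are introduced here for illustration only.

```python
import numpy as np

def gershgorin_step_size(hessian):
    """Safe step size 1 / lambda_max_bound, where the bound on the largest
    eigenvalue comes from the Gershgorin circle theorem: every eigenvalue lies
    in a disc centered at hessian[i, i] with radius sum_{j != i} |hessian[i, j]|."""
    radii = np.abs(hessian).sum(axis=1) - np.abs(np.diag(hessian))
    return 1.0 / np.max(np.diag(hessian) + radii)

def agd_map_denoise(y, inv_sigma2, L, mu, iters=200):
    """Nesterov-accelerated gradient descent on
        f(x) = 0.5 * (x - y)^T diag(inv_sigma2) (x - y) + mu * x^T L x,
    a hypothetical stand-in for the paper's MAP graph-filtering solver."""
    hessian = np.diag(inv_sigma2) + 2.0 * mu * L   # constant Hessian of f
    step = gershgorin_step_size(hessian)           # GCT-based step size
    x = y.copy()                                   # main iterate
    z = y.copy()                                   # momentum (look-ahead) iterate
    t = 1.0
    for _ in range(iters):
        grad = inv_sigma2 * (z - y) + 2.0 * mu * (L @ z)   # gradient at z
        x_next = z - step * grad
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # momentum schedule
        z = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x
```

On a depth row of N pixels, y and inv_sigma2 would be length-N arrays and L the N x N combinatorial Laplacian assembled from the learned edge weights; a step size no larger than the inverse of the Gershgorin bound guarantees descent for this quadratic objective.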

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2022, Vol. 31, pp. 6863-6878
Main Authors: Zhang, Xue; Cheung, Gene; Pang, Jiahao; Sanghvi, Yash; Gnanasambandam, Abhiram; Chan, Stanley H.
Format: Article
Language: English
Online Access: Order full text
container_end_page 6878
container_issue
container_start_page 6863
container_title IEEE transactions on image processing
container_volume 31
creator Zhang, Xue
Cheung, Gene
Pang, Jiahao
Sanghvi, Yash
Gnanasambandam, Abhiram
Chan, Stanley H.
doi_str_mv 10.1109/TIP.2022.3214077
format Article
publisher New York: IEEE
pmid 36306306
coden IIPRE4
fulltext fulltext_linktorsrc
identifier ISSN: 1057-7149
ispartof IEEE transactions on image processing, 2022, Vol.31, p.6863-6878
issn 1057-7149
1941-0042
language eng
recordid cdi_ieee_primary_9932276
source IEEE Electronic Library (IEL)
subjects 3D point cloud
depth sensing
graph signal processing
Image enhancement
Image sensors
Interpolation
Measurement
Noise measurement
Noise reduction
non-uniform quantization
Optimization
Pixels
Point cloud compression
Quantization (signal)
Sensors
Signal processing
signal-dependent noise
Similarity
Three dimensional models
Three-dimensional displays
title Graph-Based Depth Denoising & Dequantization for Point Cloud Enhancement
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T21%3A24%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Graph-Based%20Depth%20Denoising%20&%20Dequantization%20for%20Point%20Cloud%20Enhancement&rft.jtitle=IEEE%20transactions%20on%20image%20processing&rft.au=Zhang,%20Xue&rft.date=2022&rft.volume=31&rft.spage=6863&rft.epage=6878&rft.pages=6863-6878&rft.issn=1057-7149&rft.eissn=1941-0042&rft.coden=IIPRE4&rft_id=info:doi/10.1109/TIP.2022.3214077&rft_dat=%3Cproquest_RIE%3E2730314072%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2731854676&rft_id=info:pmid/36306306&rft_ieee_id=9932276&rfr_iscdi=true