Learning A Locally Unified 3D Point Cloud for View Synthesis

In this paper, we explore the problem of 3D point cloud representation-based view synthesis from a set of sparse source views. To tackle this challenging problem, we propose a new deep learning-based view synthesis paradigm that learns a locally unified 3D point cloud from source views. Specifically...


Saved in:
Bibliographic details
Published in: IEEE Transactions on Image Processing, 2023, Vol. 32, p. 1-1
Main authors: You, Meng; Guo, Mantang; Lyu, Xianqiang; Liu, Hui; Hou, Junhui
Format: Article
Language: eng
Subjects:
Online access: Order full text
container_end_page 1
container_issue
container_start_page 1
container_title IEEE transactions on image processing
container_volume 32
creator You, Meng
Guo, Mantang
Lyu, Xianqiang
Liu, Hui
Hou, Junhui
description In this paper, we explore the problem of 3D point cloud representation-based view synthesis from a set of sparse source views. To tackle this challenging problem, we propose a new deep learning-based view synthesis paradigm that learns a locally unified 3D point cloud from source views. Specifically, we first construct sub-point clouds by projecting source views to 3D space based on their depth maps. Then, we learn the locally unified 3D point cloud by adaptively fusing points at a local neighborhood defined on the union of the sub-point clouds. Besides, we also propose a 3D geometry-guided image restoration module to fill the holes and recover high-frequency details of the rendered novel views. Experimental results on three benchmark datasets demonstrate that our method can improve the average PSNR by more than 4 dB while preserving more accurate visual details, compared with state-of-the-art view synthesis methods. The code will be publicly available at https://github.com/mengyou2/PCVS.
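The description above outlines a three-stage pipeline: back-project each source view into a sub-point cloud using its depth map, fuse points within local neighborhoods defined on the union of the sub-point clouds, and finally apply geometry-guided restoration to the rendered views. As a rough illustration of the first stage only, the sketch below lifts a depth map into a world-space sub-point cloud with a pinhole camera model. It is a minimal sketch, not code from the PCVS repository; the function name, the intrinsic matrix K, and the pose values are hypothetical.

```python
# Minimal sketch (not the authors' code): back-project one source view's depth map
# into a colored sub-point cloud in world coordinates, so that sub-point clouds
# from different views can later be fused in a shared 3D space.
import numpy as np

def depth_to_sub_point_cloud(depth, rgb, K, cam_to_world):
    """Lift each pixel of a source view to a colored 3D point.

    depth:        (H, W) depth map in camera units
    rgb:          (H, W, 3) source-view colors
    K:            (3, 3) camera intrinsics
    cam_to_world: (4, 4) camera-to-world extrinsics
    returns:      (H*W, 3) world-space points and (H*W, 3) colors
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates (u, v, 1).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Back-projection with the pinhole model: X_cam = depth * K^{-1} [u, v, 1]^T
    rays = pix @ np.linalg.inv(K).T
    pts_cam = rays * depth.reshape(-1, 1)

    # Transform to world coordinates so sub-point clouds from different views align.
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    pts_world = (pts_h @ cam_to_world.T)[:, :3]

    return pts_world, rgb.reshape(-1, 3)

# Toy usage with a constant depth map and identity pose (illustrative values only).
K = np.array([[500.0, 0.0, 32.0], [0.0, 500.0, 32.0], [0.0, 0.0, 1.0]])
depth = np.full((64, 64), 2.0)
rgb = np.zeros((64, 64, 3))
points, colors = depth_to_sub_point_cloud(depth, rgb, K, np.eye(4))
print(points.shape, colors.shape)  # (4096, 3) (4096, 3)
```

Fusing the resulting sub-point clouds (the paper's adaptive local-neighborhood fusion) and the geometry-guided restoration module are learned components and are not reproduced here.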
doi_str_mv 10.1109/TIP.2023.3321458
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1057-7149
ispartof IEEE transactions on image processing, 2023, Vol.32, p.1-1
issn 1057-7149
1941-0042
language eng
recordid cdi_proquest_journals_2876681409
source IEEE Electronic Library (IEL)
subjects 3D point clouds
Cloud computing
Deep learning
Fuses
Geometry
Image restoration
Image-based rendering
Point cloud compression
point cloud fusion
Rendering (computer graphics)
Synthesis
Three dimensional models
Three-dimensional displays
view synthesis
title Learning A Locally Unified 3D Point Cloud for View Synthesis
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T13%3A22%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20A%20Locally%20Unified%203D%20Point%20Cloud%20for%20View%20Synthesis&rft.jtitle=IEEE%20transactions%20on%20image%20processing&rft.au=You,%20Meng&rft.date=2023&rft.volume=32&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=1057-7149&rft.eissn=1941-0042&rft.coden=IIPRE4&rft_id=info:doi/10.1109/TIP.2023.3321458&rft_dat=%3Cproquest_RIE%3E2876681409%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2876681409&rft_id=info:pmid/&rft_ieee_id=10274683&rfr_iscdi=true