Joint self-supervised and reference-guided learning for depth inpainting
Depth information can benefit various computer vision tasks on both images and videos. However, depth maps may suffer from invalid values in many pixels, and also large holes. To improve such data, we propose a joint self-supervised and reference-guided learning approach for depth inpainting. For the self-supervised learning strategy, we introduce an improved spatial convolutional sparse coding module in which total variation regularization is employed to enhance the structural information while preserving edge information. This module alternately learns a convolutional dictionary and sparse coding from a corrupted depth map. Then, both the learned convolutional dictionary and sparse coding are convolved to yield an initial depth map, which is effectively smoothed using local contextual information. The reference-guided learning part is inspired by the fact that adjacent pixels with close colors in the RGB image tend to have similar depth values. We thus construct a hierarchical joint bilateral filter module using the corresponding color image to fill in large holes. In summary, our approach integrates a convolutional sparse coding module to preserve local contextual information and a hierarchical joint bilateral filter module for filling using specific adjacent information. Experimental results show that the proposed approach works well for both invalid value restoration and large hole inpainting.
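The self-supervised half of the approach reconstructs a depth map by convolving learned dictionary filters with their sparse code maps. The synthesis step of that convolutional sparse coding can be sketched as follows; this is an illustrative pure-NumPy sketch, not the authors' implementation, and the function names, the zero-padded "same" output size, and the input shapes are all assumptions:

```python
import numpy as np

def conv2d_same(img, kern):
    """Minimal 'same'-size 2-D convolution (zero padding, pure NumPy)."""
    kh, kw = kern.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)))
    fk = kern[::-1, ::-1]  # flip the kernel for true convolution
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * fk)
    return out

def csc_reconstruct(filters, codes):
    """Synthesis step of convolutional sparse coding:
    estimated depth = sum over k of (filter d_k convolved with code map z_k)."""
    est = np.zeros(codes[0].shape, dtype=np.float64)
    for d, z in zip(filters, codes):
        est += conv2d_same(z, d)
    return est
```

With a single identity filter the reconstruction returns the code map unchanged; in the paper the filters and codes are instead alternately optimized against the valid pixels of the corrupted depth map, with total variation regularization on top.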
Saved in:
Published in: | Computational Visual Media 2022-12, Vol.8 (4), p.597-612 |
---|---|
Main authors: | Wu, Heng; Fu, Kui; Zhao, Yifan; Song, Haokun; Li, Jia |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | 612 |
---|---|
container_issue | 4 |
container_start_page | 597 |
container_title | Computational Visual Media |
container_volume | 8 |
creator | Wu, Heng; Fu, Kui; Zhao, Yifan; Song, Haokun; Li, Jia |
description | Depth information can benefit various computer vision tasks on both images and videos. However, depth maps may suffer from invalid values in many pixels, and also large holes. To improve such data, we propose a joint self-supervised and reference-guided learning approach for depth inpainting. For the self-supervised learning strategy, we introduce an improved spatial convolutional sparse coding module in which total variation regularization is employed to enhance the structural information while preserving edge information. This module alternately learns a convolutional dictionary and sparse coding from a corrupted depth map. Then, both the learned convolutional dictionary and sparse coding are convolved to yield an initial depth map, which is effectively smoothed using local contextual information. The reference-guided learning part is inspired by the fact that adjacent pixels with close colors in the RGB image tend to have similar depth values. We thus construct a hierarchical joint bilateral filter module using the corresponding color image to fill in large holes. In summary, our approach integrates a convolutional sparse coding module to preserve local contextual information and a hierarchical joint bilateral filter module for filling using specific adjacent information. Experimental results show that the proposed approach works well for both invalid value restoration and large hole inpainting. |
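The reference-guided part described above can be illustrated with a minimal joint bilateral fill: an invalid depth pixel is replaced by an average of valid neighbours, weighted jointly by spatial distance and RGB similarity. This is a single-scale sketch under stated assumptions (holes marked by zeros, hypothetical parameter values, hypothetical function name), not the paper's module, which applies the filter hierarchically across resolutions:

```python
import numpy as np

def joint_bilateral_fill(depth, rgb, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Fill invalid depth pixels (assumed to be 0) with an average of valid
    neighbours, weighted by spatial distance and RGB colour similarity."""
    depth = np.asarray(depth, dtype=np.float64)
    rgb = np.asarray(rgb, dtype=np.float64)
    h, w = depth.shape
    out = depth.copy()
    ys, xs = np.where(depth == 0)                      # hole pixels
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch_d = depth[y0:y1, x0:x1]
        patch_c = rgb[y0:y1, x0:x1]
        valid = patch_d > 0
        if not valid.any():
            continue                                   # left for a coarser level
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
        diff = patch_c - rgb[y, x]                     # range term uses the guide image
        w_r = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_r ** 2))
        wgt = (w_s * w_r)[valid]
        out[y, x] = np.sum(wgt * patch_d[valid]) / (np.sum(wgt) + 1e-12)
    return out
```

Pixels whose entire neighbourhood is invalid stay unfilled at this scale, which is what motivates a hierarchical coarse-to-fine arrangement for large holes.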
doi_str_mv | 10.1007/s41095-021-0259-z |
format | Article |
publisher | Beijing: Tsinghua University Press; Springer Nature B.V. |
rights | The Author(s) 2022; published under http://creativecommons.org/licenses/by/4.0/ |
fulltext | fulltext |
identifier | ISSN: 2096-0433 |
ispartof | Computational Visual Media, 2022-12, Vol.8 (4), p.597-612 |
issn | 2096-0433; 2096-0662 |
language | eng |
recordid | cdi_proquest_journals_2685225807 |
source | DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; Springer Nature OA Free Journals |
subjects | Artificial Intelligence; Coding; Color imagery; Computer Graphics; Computer Science; Computer vision; Dictionaries; Image Processing and Computer Vision; Machine vision; Modules; Pixels; Regularization; Research Article; Supervised learning; User Interfaces and Human Computer Interaction |
title | Joint self-supervised and reference-guided learning for depth inpainting |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T16%3A02%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Joint%20self-supervised%20and%20reference-guided%20learning%20for%20depth%20inpainting&rft.jtitle=Computational%20Visual%20Media&rft.au=Wu,%20Heng&rft.date=2022-12-01&rft.volume=8&rft.issue=4&rft.spage=597&rft.epage=612&rft.pages=597-612&rft.issn=2096-0433&rft.eissn=2096-0662&rft_id=info:doi/10.1007/s41095-021-0259-z&rft_dat=%3Cgale_proqu%3EA731370454%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2685225807&rft_id=info:pmid/&rft_galeid=A731370454&rfr_iscdi=true |