Depth completion for kinect v2 sensor

Kinect v2 adopts a time-of-flight (ToF) depth sensing mechanism, which causes different types of depth artifacts compared to the original Kinect v1. The goal of this paper is to propose a depth completion method designed specifically for the Kinect v2 depth artifacts. Observing the specific types of depth errors in the Kinect v2, such as thin hole-lines along the object boundaries and a new type of holes in the image corners, we exploit the position information of the color edges extracted from the Kinect v2 sensor to guide accurate hole-filling around the object boundaries. Since our approach requires a precise registration between the color and depth images, we also introduce a transformation matrix that yields point-to-point correspondence with pixel accuracy. Experimental results demonstrate the effectiveness of the proposed depth image completion algorithm for the Kinect v2 in terms of completion accuracy and execution time.
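
The abstract describes two components: registering the depth map to the color image through a transformation matrix, and using color edges to guide hole filling around object boundaries. The Python/OpenCV sketch below is a minimal illustration of those two ideas, not the authors' implementation; the function names and parameters are mine, and the homography H, the zero-valued-hole convention, the window-growing search, and the median fill are all illustrative assumptions.

```python
import cv2
import numpy as np

def register_depth_to_color(depth, H, color_shape):
    """Warp the depth map into the color camera frame using a 3x3 matrix H
    (a stand-in for the paper's color-depth registration transform)."""
    h, w = color_shape[:2]
    # Nearest-neighbour interpolation avoids blending depth values
    # across object boundaries during the warp.
    return cv2.warpPerspective(depth, H, (w, h), flags=cv2.INTER_NEAREST)

def fill_holes_with_color_edges(depth, color_bgr, max_radius=5):
    """Fill zero-valued (missing) depth pixels, guided by color edges.
    Assumes depth has already been registered to the color frame."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150) > 0          # color-edge map
    filled = depth.copy()
    h, w = depth.shape
    for y, x in np.argwhere(depth == 0):
        # Grow a square window around the hole until it contains a valid
        # depth sample; candidates lying on a color edge are rejected, a
        # simple proxy for not letting the fill bleed across boundaries.
        for r in range(1, max_radius + 1):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            window = filled[y0:y1, x0:x1]
            valid = (window > 0) & ~edges[y0:y1, x0:x1]
            if valid.any():
                filled[y, x] = np.median(window[valid])
                break
    return filled
```

In the paper, the transformation matrix yields point-to-point correspondence between the color and depth images with pixel accuracy; the homography above merely stands in for whatever mapping that calibration produces, and the edge-guided fill shows why the registration needs to be that precise.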

Bibliographic details

Published in: Multimedia tools and applications, 2017-02, Vol. 76 (3), p. 4357-4380
Main authors: Song, Wanbin; Le, Anh Vu; Yun, Seokmin; Jung, Seung-Won; Won, Chee Sun
Format: Article
Language: English
Subjects: Accuracy; Boundaries; Color; Computer Communication Networks; Computer Science; Corners; Data Structures and Information Theory; Image processing systems; Multimedia Information Systems; Object recognition; Registration; Sensors; Special Purpose and Application-Based Systems; Studies; Transformations
Online access: Full text
DOI: 10.1007/s11042-016-3523-y
ISSN: 1380-7501
EISSN: 1573-7721
Publisher: Springer US, New York
Source: SpringerNature Journals