Refining Geometry from Depth Sensors using IR Shading Images
We propose a method to refine geometry of 3D meshes from a consumer level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit to using an IR camera instead of an RGB camera is that the IR images captured are narrow band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination.
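The shading model named in the abstract relates captured IR intensity to surface normal, albedo, lighting direction, and light-to-surface distance. Below is a minimal sketch of one common form of such a near-light model, assuming a Lambertian surface and inverse-square falloff; the function name `ir_shading` and its parameters are illustrative, and the paper's exact formulation may differ.

```python
import numpy as np

def ir_shading(points, normals, albedo, light_pos):
    """Predicted IR intensity under a single near point light.

    A Lambertian sketch of a near-light shading model (an assumption;
    not necessarily the paper's exact form).

    points:    (N, 3) surface points on the mesh
    normals:   (N, 3) unit surface normals
    albedo:    scalar or (N,) IR albedo (roughly uniform in IR, per the abstract)
    light_pos: (3,) IR projector treated as a point light source
    """
    to_light = light_pos - points                     # vectors from surface to source
    dist = np.linalg.norm(to_light, axis=1)           # light-to-surface distances
    light_dir = to_light / dist[:, None]              # unit lighting directions
    cos_theta = np.clip((normals * light_dir).sum(axis=1), 0.0, None)
    return albedo * cos_theta / dist**2               # Lambertian term with inverse-square falloff
```

For example, a point one meter from the source with its normal facing the light receives intensity albedo × 1 / 1², which is why normals and distances trade off against each other in this model.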
Saved in:

| Published in: | International journal of computer vision, 2017-03, Vol. 122 (1), p. 1-16 |
|---|---|
| Main authors: | Choe, Gyeongmin; Park, Jaesik; Tai, Yu-Wing; Kweon, In So |
| Format: | Article |
| Language: | English |
| Subjects: | 3-D graphics; Albedo; Analysis; Artificial Intelligence; Cameras; Computer Imaging; Computer Science; Finite element method; Geometry; Illumination; Image Processing and Computer Vision; Infrared cameras; Infrared imaging systems; Light; Mathematical models; Pattern Recognition; Pattern Recognition and Graphics; Projectors; Sensors; Shading; Studies; Texture; Vision; Vision systems |
| Online access: | Full text |
| container_end_page | 16 |
|---|---|
| container_issue | 1 |
| container_start_page | 1 |
| container_title | International journal of computer vision |
| container_volume | 122 |
| creator | Choe, Gyeongmin; Park, Jaesik; Tai, Yu-Wing; Kweon, In So |
| description | We propose a method to refine geometry of 3D meshes from a consumer level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit to using an IR camera instead of an RGB camera is that the IR images captured are narrow band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, for many natural objects with colorful textures in the visible spectrum, the subjects appear to have a uniform albedo in the IR spectrum. Based on our analyses on the IR projector light of the Kinect, we define a near light source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and distance between light source and surface points. To resolve the ambiguity in our model between the normals and distances, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by the Kinect fusion. Our approach directly operates on the mesh model for geometry refinement. We ran experiments on our algorithm for geometries captured by both the Kinect I and Kinect II, as the depth acquisition in Kinect I is based on a structured-light technique and that of the Kinect II is based on a time-of-flight technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinements. |
| doi_str_mv | 10.1007/s11263-016-0937-y |
| format | Article |
| fulltext | fulltext |
| identifier | ISSN: 0920-5691 |
| ispartof | International journal of computer vision, 2017-03, Vol.122 (1), p.1-16 |
| issn | 0920-5691; 1573-1405 |
| language | eng |
| recordid | cdi_proquest_miscellaneous_1884106987 |
| source | SpringerLink |
| subjects | 3-D graphics; Albedo; Analysis; Artificial Intelligence; Cameras; Computer Imaging; Computer Science; Finite element method; Geometry; Illumination; Image Processing and Computer Vision; Infrared cameras; Infrared imaging systems; Light; Mathematical models; Pattern Recognition; Pattern Recognition and Graphics; Projectors; Sensors; Shading; Studies; Texture; Vision; Vision systems |
| title | Refining Geometry from Depth Sensors using IR Shading Images |
| url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T14%3A31%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Refining%20Geometry%20from%20Depth%20Sensors%20using%20IR%20Shading%20Images&rft.jtitle=International%20journal%20of%20computer%20vision&rft.au=Choe,%20Gyeongmin&rft.date=2017-03-01&rft.volume=122&rft.issue=1&rft.spage=1&rft.epage=16&rft.pages=1-16&rft.issn=0920-5691&rft.eissn=1573-1405&rft_id=info:doi/10.1007/s11263-016-0937-y&rft_dat=%3Cgale_proqu%3EA551029048%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1868541825&rft_id=info:pmid/&rft_galeid=A551029048&rfr_iscdi=true |
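To illustrate how the initial Kinect fusion mesh can help resolve the normal-distance ambiguity the abstract mentions, here is a hypothetical single-view refinement sketch that reuses `ir_shading` from the sketch above. Per-vertex displacements along the initial normals are chosen so the predicted shading matches the observed IR intensities, while a quadratic prior (weight `lam`, an assumed regularizer, not the paper's objective) keeps vertices near the initial mesh; the names `refine_mesh` and `lam` are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_mesh(points0, normals0, observed, albedo, light_pos, lam=0.1):
    """Toy geometry refinement against an observed IR image (a sketch,
    not the paper's algorithm).

    points0, normals0: (N, 3) initial vertices and unit normals from Kinect fusion
    observed:          (N,) IR intensities sampled at the vertices
    """
    def residuals(d):
        points = points0 + d[:, None] * normals0          # displace along initial normals
        pred = ir_shading(points, normals0, albedo, light_pos)
        data = pred - observed                            # match observed IR shading
        prior = np.sqrt(lam) * d                          # stay close to the initial mesh
        return np.concatenate([data, prior])

    d = least_squares(residuals, np.zeros(len(points0))).x
    return points0 + d[:, None] * normals0                # refined vertex positions
```

Normals are held fixed here for brevity; a full implementation would re-estimate them from the displaced surface and add the multi-view constraints the abstract describes.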