A Real-Time Online Learning Framework for Joint 3D Reconstruction and Semantic Segmentation of Indoor Scenes


Detailed description

Saved in:
Bibliographic details
Main authors: Menini, Davide, Kumar, Suryansh, Oswald, Martin R, Sandstrom, Erik, Sminchisescu, Cristian, Van Gool, Luc
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Menini, Davide; Kumar, Suryansh; Oswald, Martin R; Sandstrom, Erik; Sminchisescu, Cristian; Van Gool, Luc
description This paper presents a real-time online vision framework to jointly recover an indoor scene's 3D structure and semantic labels. Given noisy depth maps, a camera trajectory, and 2D semantic labels at training time, the proposed deep neural network-based approach learns to fuse the depth over frames with suitable semantic labels in the scene space. Our approach exploits the joint volumetric representation of the depth and semantics in the scene feature space to solve this task. For a compelling online fusion of the semantic labels and geometry in real time, we introduce an efficient vortex pooling block while dropping the routing network used in online depth fusion, thereby preserving high-frequency surface details. We show that the context information provided by the scene semantics helps the depth fusion network learn noise-resistant features. It also helps overcome the shortcomings of current online depth fusion methods in dealing with thin object structures, thickening artifacts, and false surfaces. Experimental evaluation on the Replica dataset shows that our approach can perform depth fusion at 37 and 10 frames per second with an average reconstruction F-score of 88% and 91%, respectively, depending on the depth map resolution. Moreover, our model achieves an average IoU score of 0.515 on the ScanNet 3D semantic benchmark leaderboard.
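The frame-by-frame depth fusion described in the abstract follows the classic TSDF-fusion pattern: each voxel maintains a weighted running average of signed-distance observations accumulated over incoming depth frames. The sketch below illustrates that per-voxel update only, not the authors' learned fusion network; the function name, observation weights, and weight cap are illustrative assumptions:

```python
def fuse_voxel(d_avg, w, d_obs, w_obs=1.0, w_max=64.0):
    """Weighted running average of signed-distance observations at one voxel.

    d_avg, w : current fused distance and accumulated weight
    d_obs    : signed-distance observation from the newest depth frame
    """
    d_avg = (d_avg * w + d_obs * w_obs) / (w + w_obs)
    w = min(w + w_obs, w_max)  # cap the weight so the voxel stays adaptable
    return d_avg, w

# Toy example: three noisy per-frame observations of the same surface point.
d, w = 0.0, 0.0
for obs in (0.4, 0.2, 0.3):
    d, w = fuse_voxel(d, w, obs)
```

Averaging like this suppresses per-frame depth noise, but it is also what blurs thin structures and creates thickening artifacts; the paper's contribution is a learned, semantics-aware alternative to this hand-crafted update.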
doi_str_mv 10.48550/arxiv.2108.05246
format Article
identifier DOI: 10.48550/arxiv.2108.05246
language eng
recordid cdi_arxiv_primary_2108_05246
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title A Real-Time Online Learning Framework for Joint 3D Reconstruction and Semantic Segmentation of Indoor Scenes
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T00%3A52%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Real-Time%20Online%20Learning%20Framework%20for%20Joint%203D%20Reconstruction%20and%20Semantic%20Segmentation%20of%20Indoor%20Scenes&rft.au=Menini,%20Davide&rft.date=2021-08-11&rft_id=info:doi/10.48550/arxiv.2108.05246&rft_dat=%3Carxiv_GOX%3E2108_05246%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true