SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos

In this paper, we introduce SLAM3R, a novel and effective monocular RGB SLAM system for real-time and high-quality dense 3D reconstruction. SLAM3R provides an end-to-end solution by seamlessly integrating local 3D reconstruction and global coordinate registration through feed-forward neural networks. Given an input video, the system first converts it into overlapping clips using a sliding window mechanism. Unlike traditional pose optimization-based methods, SLAM3R directly regresses 3D pointmaps from RGB images in each window and progressively aligns and deforms these local pointmaps to create a globally consistent scene reconstruction - all without explicitly solving any camera parameters. Experiments across datasets consistently show that SLAM3R achieves state-of-the-art reconstruction accuracy and completeness while maintaining real-time performance at 20+ FPS. Code and weights at: https://github.com/PKU-VCL-3DV/SLAM3R.
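The sliding-window step mentioned in the abstract can be illustrated with a minimal sketch: splitting an input frame sequence into overlapping fixed-size clips. The function name, window length, and stride below are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of a sliding-window mechanism: convert a frame
# sequence into overlapping clips. Window/stride values are assumptions.

def sliding_window_clips(frames, window=11, stride=5):
    """Return overlapping clips of `window` frames, advancing by `stride`."""
    clips = []
    for start in range(0, max(len(frames) - window + 1, 1), stride):
        clips.append(frames[start:start + window])
    return clips

# Example: 30 frames, window 11, stride 5 -> consecutive clips overlap by 6 frames.
clips = sliding_window_clips(list(range(30)))
```

Each clip would then be processed locally (pointmap regression) before the per-window results are aligned into a single scene; the overlap between adjacent clips is what makes that alignment possible.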


Bibliographic Details
Main Authors: Liu, Yuzheng; Dong, Siyan; Wang, Shuzhe; Yang, Yanchao; Fan, Qingnan; Chen, Baoquan
Format: Article
Language: English
Published: 2024-12-12 (arXiv)
Online Access: Order full text
DOI: 10.48550/arxiv.2412.09401
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition