Lidar-Monocular Surface Reconstruction Using Line Segments
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Structure from Motion (SfM) often fails to estimate accurate poses in environments that lack suitable visual features. In such cases, the quality of the final 3D mesh, which is contingent on the accuracy of those pose estimates, is reduced. One way to overcome this problem is to combine data from a monocular camera with data from a LIDAR. This allows fine details and texture to be captured while still accurately representing featureless subjects. However, fusing these two sensor modalities is challenging due to their fundamentally different characteristics. Rather than directly fusing image features and LIDAR points, we propose to leverage common geometric features that are detected in both the LIDAR scans and the image data, allowing data from the two sensors to be processed in a higher-level space. In particular, we propose to find correspondences between 3D lines extracted from LIDAR scans and 2D lines detected in images before performing a bundle adjustment to refine poses. We also exploit the detected and optimized line segments to improve the quality of the final mesh. We test our approach on the recently published Newer College Dataset. We compare the accuracy and completeness of the 3D mesh to a ground truth obtained with a survey-grade 3D scanner. We show that our method delivers results that are comparable to a state-of-the-art LIDAR survey while not requiring highly accurate ground truth pose estimates.
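The pipeline the abstract describes, matching 3D lines from LIDAR scans to 2D lines detected in images and refining poses with bundle adjustment, hinges on a line reprojection residual. Below is a minimal Python sketch of one common point-to-line residual under a pinhole camera model; the function names and the exact residual are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def project_point(K, R, t, X):
    """Project world point X into the image with a pinhole model,
    where (R, t) maps world coordinates to camera coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def segment_to_line(p, q):
    """Homogeneous line (a, b, c) through 2D endpoints p and q,
    scaled so that a*u + b*v + c is a signed distance in pixels."""
    l = np.cross(np.append(p, 1.0), np.append(q, 1.0))
    return l / np.linalg.norm(l[:2])

def line_reprojection_error(K, R, t, P0, P1, line2d):
    """Sum of point-to-line distances from the projected endpoints of a
    3D segment (P0, P1) to a detected 2D line. A residual of this form
    is a typical choice for line-based bundle adjustment."""
    err = 0.0
    for P in (P0, P1):
        u, v = project_point(K, R, t, P)
        err += abs(line2d[0] * u + line2d[1] * v + line2d[2])
    return err

# Toy check: a 3D segment matched against the 2D line it projects to
# under the true pose gives zero residual; a perturbed pose does not.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
P0, P1 = np.array([0.0, 1.0, 5.0]), np.array([2.0, 1.0, 5.0])
line2d = segment_to_line(project_point(K, R, t, P0),
                         project_point(K, R, t, P1))
print(line_reprojection_error(K, R, t, P0, P1, line2d))  # ~0.0
print(line_reprojection_error(K, R, t + np.array([0.0, 0.1, 0.0]),
                              P0, P1, line2d))           # > 0
```

In a full bundle adjustment, residuals of this form would be summed over all matched 3D-2D line pairs and minimized jointly over the camera poses (and possibly the line parameters) with a nonlinear least-squares solver.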
DOI: 10.48550/arxiv.2104.02761