Enhanced 3D imaging based on regional optical texture synthesis

Bibliographic Details
Published in: Optics Express, 2025-01, Vol. 33 (2), p. 2406
Main authors: Que, Yufei; Ding, Junzhe; Xie, Jie; Wu, Cheng
Format: Article
Language: English
Online access: Full text
Description
Abstract: Optical information synthesis, which fuses LiDAR and optical cameras, has the potential to produce highly detailed 3D representations. However, because of the disparity in information density between point clouds and images, conventional point-based matching methods often lose significant information. To address this issue, we propose a regional matching method that bridges the difference in information density between point clouds and images. Specifically, fine semantic regions are extracted from images by analyzing their gradients. Simultaneously, point clouds are converted into meshes, where each facet corresponds to a coarse semantic region. Extrinsic matrices are used to unify the point cloud coordinate system with the image coordinate system. The mesh is then subdivided under the guidance of image texture information to create regional matching units. Within each matching unit, the information density of the point cloud and the image is balanced at a semantic level, and the texture features of the image are well preserved in the resulting mesh structure. Consequently, the proposed texture synthesis method significantly enhances the overall quality and realism of the 3D imaging.
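The coordinate unification mentioned in the abstract corresponds to the standard LiDAR-to-camera projection. The sketch below is not the authors' code; it is a minimal NumPy illustration, assuming a hypothetical extrinsic matrix (R, t) and pinhole intrinsics K with placeholder values, of how points in the LiDAR frame can be mapped into the camera frame and then projected to pixel coordinates.

```python
import numpy as np

# Hypothetical extrinsic rotation R and translation t (LiDAR frame -> camera frame).
# Values are placeholders, not calibration results from the paper.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.05, -0.10, 0.20])

# Hypothetical pinhole intrinsic matrix (fx, fy, cx, cy are placeholders).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def project_points(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points to Nx2 pixel coordinates.

    Points that land behind the image plane are returned as NaN.
    """
    # Rigid transform into the camera frame: X_cam = R @ X_lidar + t
    points_cam = points_lidar @ R.T + t
    # Perspective projection: [u*z, v*z, z] = K @ X_cam
    z = points_cam[:, 2]
    uv_hom = points_cam @ K.T
    uv = uv_hom[:, :2] / uv_hom[:, 2:3]
    uv[z <= 0] = np.nan  # discard points behind the camera
    return uv

if __name__ == "__main__":
    pts = np.array([[2.0,  0.1, -0.2],
                    [5.0, -1.0,  0.5]])
    print(project_points(pts))
```

Once points (or mesh facets) are expressed in pixel coordinates, region-level correspondences between mesh facets and image regions can be established, which is the stage where the proposed regional matching operates.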
ISSN: 1094-4087
DOI: 10.1364/OE.541246