CLFNet: a multi-modal data fusion network for traffic sign extraction

Bibliographic Details
Published in: Measurement Science & Technology, 2025-01, Vol. 36 (1), p. 15131
Main authors: Liu, Rufei; Su, Zhanwen; Zhang, Yi; Li, Ming
Format: Article
Language: English
Online access: Full text
Description
Abstract: When using image data for signage extraction, poor visibility conditions such as insufficient light, rain, and low light intensity lead to low accuracy and poor boundary segmentation in vision-based detection methods. To address this problem, we propose a cross-modal latent feature fusion network for signage detection, which obtains rich boundary information by combining images with light detection and ranging (LiDAR) depth images, thus compensating for the pseudo-boundary phenomenon that may occur when segmenting a single RGB image. First, HRNet is used as the backbone network, and a boundary extraction module extracts boundary information from the point-cloud depth map and the RGB image. Second, a feature aggregation module deeply fuses the extracted boundary information with the image features, enhancing sensitivity to boundaries. Finally, boundary Intersection over Union (IoU) is introduced as an evaluation metric. The results show that the method outperforms mainstream RGB-D networks, improving IoU and boundary IoU by 5.5% and 6.1% relative to the baseline network and reaching accuracies of 98.3% and 96.2%, respectively.
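The boundary IoU metric used in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the shift-based erosion, and the boundary-width parameter `d` are assumptions. The idea is that a mask's boundary region is the set of foreground pixels within `d` pixels of the background, and the metric is the ordinary IoU computed over the two boundary regions instead of the full masks.

```python
import numpy as np

def boundary_region(mask: np.ndarray, d: int = 1) -> np.ndarray:
    """Inner boundary of a binary mask: foreground pixels whose
    (2d+1)x(2d+1) neighbourhood contains at least one background pixel.
    Implemented as erosion via shifted copies, then mask minus eroded mask."""
    mask = mask.astype(bool)
    padded = np.pad(mask, d, constant_values=False)
    eroded = mask.copy()
    h, w = mask.shape
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            # View of the mask shifted by (dy, dx); AND-ing all shifts
            # keeps only pixels whose whole neighbourhood is foreground.
            eroded &= padded[d + dy : d + dy + h, d + dx : d + dx + w]
    return mask & ~eroded

def boundary_iou(pred: np.ndarray, gt: np.ndarray, d: int = 1) -> float:
    """IoU restricted to the boundary regions of prediction and ground truth."""
    bp, bg = boundary_region(pred, d), boundary_region(gt, d)
    inter = np.logical_and(bp, bg).sum()
    union = np.logical_or(bp, bg).sum()
    return float(inter) / float(union) if union else 1.0
```

A perfect prediction scores 1.0, while a mask shifted by even one pixel is penalised much more heavily than under plain mask IoU, which is why the metric is sensitive to the boundary quality the paper targets.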
ISSN:0957-0233
1361-6501
DOI:10.1088/1361-6501/ad95af