Object Classification Based on Enhanced Evidence Theory: Radar-Vision Fusion Approach for Roadside Application
Published in: | IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, p. 1-12 |
---|---|
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Roadside object detection and classification provide a good understanding of driving scenarios with regard to over-the-horizon perception. However, typical roadside sensors are insufficient when used separately. The fusion of millimeter-wave (MMW) radar and a monovision camera serves as an efficient approach. Unfortunately, uncertain and conflicting data under extreme light conditions pose challenges to the fusion process. To this end, this study proposed an evidential framework to fuse the radar and camera data. A novel modeling approach for basic belief assignments (BBAs) was introduced, which took the uncertainty of the convolutional neural network (CNN) model into consideration. Moreover, single-scan and multiscan fusion methods were developed based on the enhanced evidence theory, which applied different weighting coefficients derived from the reinforced belief (RB) divergence measure and belief entropy (BE). Both numerical and empirical experiments were conducted to investigate the method's performance. Specifically, in the numerical experiments, the belief value of the actual classification increased to 99.01%. In the empirical experiments, based on real datasets collected by roadside devices, the proposed method was demonstrated to outperform state-of-the-art methods, achieving precisions of 71.06% and 87.23% under bright-light and low-illumination conditions, respectively. The results verify that the proposed method is effective in fusing conflicting and uncertain data. |
---|---|
ISSN: | 0018-9456, 1557-9662 |
DOI: | 10.1109/TIM.2022.3154001 |
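The evidential fusion the abstract describes builds on Dempster-Shafer theory, where each sensor's output is expressed as a basic belief assignment (BBA) and sources are merged with a combination rule. As background, here is a minimal sketch of the classical Dempster's rule of combination in Python. The frame of discernment ({car, pedestrian}) and the radar/camera mass values are illustrative assumptions, not taken from the paper, whose enhanced method additionally weights sources via RB divergence and belief entropy.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BBAs (dicts mapping frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Intersecting focal elements reinforce each other
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            # Disjoint focal elements contribute to the conflict mass
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical BBAs over the frame {car, pedestrian}
car, ped = frozenset({"car"}), frozenset({"pedestrian"})
both = car | ped  # mass on the full frame models ignorance
m_radar = {car: 0.6, ped: 0.1, both: 0.3}
m_camera = {car: 0.7, ped: 0.2, both: 0.1}

fused = dempster_combine(m_radar, m_camera)
```

Because both sources lean toward "car", the fused belief in "car" exceeds either individual mass, which mirrors the belief-reinforcement effect reported in the paper's numerical experiments.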