Pixel-based data fusion for a better object detection in automotive applications



Bibliographic details
Authors: Thomanek, J, Lietz, H, Wanielik, G
Format: Conference paper
Language: English
Description
Abstract: The proposed technique addresses pixel-level fusion of two imaging sensors. The fused image provides a scene representation that is robust against illumination changes and varying weather conditions; combining the advantages of each camera thus extends the capabilities of many computer vision applications, such as video surveillance and automatic object recognition. The presented pixel-based fusion technique is evaluated on the images of two sensors mounted in a vehicle: a far-infrared (FIR) camera and a visible-light camera. The sensor images are first decomposed using the Dyadic Wavelet Transform. The transformed data are combined in the wavelet domain under a "goal-oriented" fusion rule. Finally, the fused wavelet representation is processed by a pedestrian detection system.
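The pipeline outlined in the abstract (decompose both sensor images, combine the coefficients under a fusion rule, reconstruct) can be sketched as follows. This is a minimal illustration only, not the paper's method: it substitutes a single-level decimated Haar transform for the Dyadic Wavelet Transform, and a generic maximum-magnitude detail rule for the unspecified "goal-oriented" rule, and it assumes already-registered, equal-size grayscale inputs.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar wavelet transform: approximation + 3 detail bands."""
    # Pairwise average/difference along rows, then along columns.
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (a[0::2, :] + a[1::2, :]) / 2.0  # approximation
    lh = (a[0::2, :] - a[1::2, :]) / 2.0  # horizontal detail
    hl = (d[0::2, :] + d[1::2, :]) / 2.0  # vertical detail
    hh = (d[0::2, :] - d[1::2, :]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = ll.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[0::2, :] = ll + lh; a[1::2, :] = ll - lh
    d[0::2, :] = hl + hh; d[1::2, :] = hl - hh
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2] = a + d; img[:, 1::2] = a - d
    return img

def fuse(img_fir, img_vis):
    """Fuse two registered images in the wavelet domain.

    Approximation bands are averaged; for each detail coefficient the
    larger-magnitude value (i.e. the stronger local contrast) is kept.
    """
    bands_a = haar_dwt2(img_fir)
    bands_b = haar_dwt2(img_vis)
    ll = (bands_a[0] + bands_b[0]) / 2.0
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(bands_a[1:], bands_b[1:])]
    return haar_idwt2(ll, *details)
```

Because the Haar pair above reconstructs perfectly, fusing an image with itself returns the image unchanged; the dyadic (undecimated) transform used in the paper additionally avoids the shift sensitivity of this decimated sketch.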
DOI:10.1109/ICICISYS.2010.5658327