Human Detection from 4D Radar Data in Low-Visibility Field Conditions
Saved in:
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Autonomous driving technology is increasingly being used on public roads and
in industrial settings such as mines. While it is essential to detect
pedestrians, vehicles, or other obstacles, adverse field conditions negatively
affect the performance of classical sensors such as cameras or lidars. Radar,
on the other hand, is a promising modality that is less affected by, e.g.,
dust, smoke, water mist or fog. In particular, modern 4D imaging radars provide
target responses across the range, vertical angle, horizontal angle and Doppler
velocity dimensions. We propose TMVA4D, a CNN architecture that leverages this
4D radar modality for semantic segmentation. The CNN is trained to distinguish
between the background and person classes based on a series of 2D projections
of the 4D radar data that include the elevation, azimuth, range, and Doppler
velocity dimensions. We also outline the process of compiling a novel dataset
consisting of data collected in industrial settings with a car-mounted 4D radar
and describe how the ground-truth labels were generated from reference thermal
images. Using TMVA4D on this dataset, we achieve an mIoU score of 78.2% and an
mDice score of 86.1%, evaluated on the two classes background and person.
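The abstract describes projecting the 4D radar tensor (elevation, azimuth, range, Doppler velocity) into a series of 2D views. The paper's exact projection method and bin counts are not given here, so the following is only a minimal sketch using max-intensity projection over pairs of axes, with hypothetical tensor dimensions:

```python
import numpy as np

# Hypothetical 4D radar tensor with axes
# (elevation, azimuth, range, Doppler); bin counts are illustrative only.
tensor = np.random.rand(32, 64, 128, 48)

# Collapse two axes at a time to obtain 2D projections. Max-intensity
# projection is one common choice; TMVA4D's actual reduction may differ.
elev_azim = tensor.max(axis=(2, 3))      # shape (32, 64)
azim_range = tensor.max(axis=(0, 3))     # shape (64, 128)
range_doppler = tensor.max(axis=(0, 1))  # shape (128, 48)

print(elev_azim.shape, azim_range.shape, range_doppler.shape)
```

Each such 2D view can then be fed to the CNN as an input channel, which is one plausible way to present 4D data to a 2D segmentation network.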
DOI: 10.48550/arxiv.2404.05307
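The reported mIoU and mDice scores are class-averaged intersection-over-union and Dice coefficients over the two classes (background, person). A minimal sketch of how these metrics are computed from label maps, with a toy 2x2 example (not data from the paper):

```python
import numpy as np

def iou(pred, gt, cls):
    """Intersection-over-union for one class."""
    p, g = pred == cls, gt == cls
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else 1.0

def dice(pred, gt, cls):
    """Dice coefficient for one class."""
    p, g = pred == cls, gt == cls
    denom = p.sum() + g.sum()
    return 2 * np.logical_and(p, g).sum() / denom if denom else 1.0

# Toy 2x2 label maps: 0 = background, 1 = person.
pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])

# Mean over the two classes, as in the paper's mIoU / mDice scores.
miou = np.mean([iou(pred, gt, c) for c in (0, 1)])
mdice = np.mean([dice(pred, gt, c) for c in (0, 1)])
```

On this toy example, mIoU averages 1/2 (background) and 2/3 (person), and mDice averages 2/3 and 4/5.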