MDT3D: Multi-Dataset Training for LiDAR 3D Object Detection Generalization
Format: Article
Language: English
Online access: Order full text
Abstract: Supervised 3D object detection models have shown increasingly strong performance in the single-domain case, where the training data comes from the same environment and sensor as the test data. In real-world scenarios, however, data from the target domain may not be available for fine-tuning or for domain adaptation methods. Indeed, 3D object detection models trained on a source dataset with a specific point distribution have difficulty generalizing to unseen datasets. We therefore leverage the information available from several annotated source datasets with our Multi-Dataset Training for 3D Object Detection (MDT3D) method, to increase the robustness of 3D object detection models when tested in a new environment with a different sensor configuration. To bridge the labelling gap between datasets, we use a new label mapping based on coarse labels. Furthermore, we show how we manage the mix of datasets during training, and introduce a new cross-dataset augmentation method: cross-dataset object injection. We demonstrate that this training paradigm improves performance for different types of 3D object detection models. The source code and additional results for this research project will be publicly available on GitHub: https://github.com/LouisSF/MDT3D
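The abstract names two concrete mechanisms: a coarse label mapping that reconciles dataset-specific class vocabularies, and cross-dataset object injection. The sketch below illustrates both under stated assumptions: the dataset names, class lists, and function signatures are hypothetical placeholders for exposition, not the authors' released MDT3D code (see the repository above for the actual implementation).

```python
# Minimal sketch of coarse label mapping and cross-dataset object injection.
# All names and class lists here are illustrative assumptions.
import numpy as np

# Hypothetical mapping from dataset-specific labels to shared coarse labels,
# so boxes from heterogeneous datasets share one training vocabulary.
COARSE_LABEL_MAP = {
    "kitti":    {"Car": "vehicle", "Pedestrian": "pedestrian",
                 "Cyclist": "cyclist"},
    "nuscenes": {"car": "vehicle", "truck": "vehicle",
                 "pedestrian": "pedestrian", "bicycle": "cyclist"},
    "waymo":    {"VEHICLE": "vehicle", "PEDESTRIAN": "pedestrian",
                 "CYCLIST": "cyclist"},
}

def to_coarse(dataset: str, label: str):
    """Return the shared coarse label, or None for classes outside the
    shared vocabulary (a multi-dataset loader would drop those boxes)."""
    return COARSE_LABEL_MAP[dataset].get(label)

def inject_objects(scene_points, object_bank, rng, num_objects=5):
    """Cross-dataset object injection, schematically: paste ground-truth
    object point clusters harvested from other annotated source datasets
    into the current scene. Collision and ground-plane checks that a real
    pipeline would need are omitted for brevity.

    scene_points: (N, 4) array of x, y, z, intensity.
    object_bank:  list of (obj_points, coarse_label, box) tuples.
    """
    new_labels, new_boxes = [], []
    k = min(num_objects, len(object_bank))
    for i in rng.choice(len(object_bank), size=k, replace=False):
        obj_points, label, box = object_bank[i]
        # Append the object's points to the scene and keep its box/label
        # so the detector sees it as a regular training target.
        scene_points = np.concatenate([scene_points, obj_points], axis=0)
        new_labels.append(label)
        new_boxes.append(box)
    return scene_points, new_labels, new_boxes

# Example: a nuScenes 'truck' box lands in the coarse 'vehicle' class.
assert to_coarse("nuscenes", "truck") == "vehicle"
```

One design point worth noting: mapping to coarse labels before training (rather than after) means every source dataset contributes supervision to the same output heads, which is what lets a single detector be trained jointly across sensors with differing class definitions.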
DOI: 10.48550/arxiv.2308.01000