Tracking-Assisted Object Detection with Event Cameras
Format: Article
Language: English
Abstract: Event-based object detection has recently garnered attention in the computer vision community due to the exceptional properties of event cameras, such as high dynamic range and the absence of motion blur. However, feature asynchronism and sparsity cause objects to become invisible when they have no relative motion to the camera, posing a significant challenge for the task. Prior works have studied various implicitly learned memories to retain as many temporal cues as possible; however, implicit memories still struggle to preserve long-term features effectively. In this paper, we treat these invisible objects as pseudo-occluded objects and aim to detect them by tracking through occlusions. First, we introduce a visibility attribute for objects and contribute an auto-labeling algorithm that both cleans the existing event-camera dataset and appends visibility labels to it. Second, we exploit tracking strategies for pseudo-occluded objects to maintain their permanence and retain their bounding boxes even when their features have been unavailable for a long time. These strategies can be treated as an explicitly learned memory, guided by the tracking objective, that records the displacements of objects across frames. Lastly, we propose a spatio-temporal feature aggregation module to enrich the latent features and a consistency loss to increase the robustness of the overall pipeline. We conduct comprehensive experiments to verify the effectiveness of our method, in which still objects are retained while genuinely occluded objects are discarded. The results demonstrate that (1) the additional visibility labels can assist supervised training, and (2) our method outperforms state-of-the-art approaches by a significant margin of 7.9% absolute mAP.
DOI: 10.48550/arxiv.2403.18330
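The record above is only an abstract, so the paper's actual tracking strategy is not specified here. As a minimal illustrative sketch (not the authors' implementation), the snippet below shows the general idea of treating an invisible object as pseudo-occluded: a hypothetical `Track`/`update_track` pair coasts a bounding box using a constant-velocity displacement recorded from earlier frames whenever no features are matched, rather than deleting the track. All names and the motion model are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # (cx, cy, w, h) in pixels

@dataclass
class Track:
    box: Box
    velocity: Tuple[float, float] = (0.0, 0.0)  # per-frame displacement of the box center
    frames_unseen: int = 0

def update_track(track: Track, detection: Optional[Box]) -> Track:
    """Advance one frame; `detection` is a matched box or None (pseudo-occluded)."""
    cx, cy, w, h = track.box
    if detection is not None:
        # Visible again: refresh the box and re-estimate the per-frame displacement.
        ncx, ncy, _, _ = detection
        return Track(box=detection, velocity=(ncx - cx, ncy - cy), frames_unseen=0)
    # No matched features: coast the box with the last known displacement
    # instead of dropping the track, so the bounding box stays alive.
    vx, vy = track.velocity
    return Track(box=(cx + vx, cy + vy, w, h),
                 velocity=track.velocity,
                 frames_unseen=track.frames_unseen + 1)

# A still object (zero relative motion) generates no events, so the detector
# sees nothing; the track nevertheless keeps its bounding box across 50 frames.
t = Track(box=(100.0, 80.0, 40.0, 30.0))
for _ in range(50):
    t = update_track(t, None)
print(t.box, t.frames_unseen)  # (100.0, 80.0, 40.0, 30.0) 50
```

In the paper this role is played by a learned memory trained with a tracking objective; the explicit constant-velocity update above merely illustrates why recording displacements across frames lets bounding boxes persist through long feature droughts.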