Ensuring UAV Safety: A Vision-only and Real-time Framework for Collision Avoidance Through Object Detection, Tracking, and Distance Estimation
Format: Article
Language: English
Abstract: In the last twenty years, unmanned aerial vehicles (UAVs) have garnered growing interest due to their expanding applications in both military and civilian domains. Detecting non-cooperative aerial vehicles efficiently and estimating collisions accurately are pivotal for achieving fully autonomous aircraft and facilitating Advanced Air Mobility (AAM). This paper presents a deep-learning framework that uses optical sensors for the detection, tracking, and distance estimation of non-cooperative aerial vehicles. Within this comprehensive sensing framework, depth information is essential for enabling autonomous aerial vehicles to perceive and navigate around obstacles. In this work, we propose a method for estimating the distance to a detected aerial object in real time using only the input of a monocular camera. To train our deep-learning components for the object detection, tracking, and depth estimation tasks, we use the Amazon Airborne Object Tracking (AOT) dataset. In contrast to previous approaches that integrate the depth estimation module into the object detector, our method formulates the problem as image-to-image translation and employs a separate lightweight encoder-decoder network for efficient and robust depth estimation. In a nutshell, the object detection module identifies and localizes obstacles, conveying this information both to the tracking module, which monitors obstacle movement, and to the depth estimation module, which calculates distances. Our approach is evaluated on the AOT dataset, which is, to the best of our knowledge, the largest air-to-air airborne object dataset.
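The abstract frames monocular depth estimation as image-to-image translation handled by a separate lightweight encoder-decoder network, decoupled from the object detector. The PyTorch sketch below illustrates that idea in minimal form; the layer sizes and the `LightweightDepthNet` name are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch (not the authors' architecture) of monocular depth
# estimation as image-to-image translation: RGB frame in, dense depth map out,
# using a small standalone encoder-decoder separate from the detector.
import torch
import torch.nn as nn

class LightweightDepthNet(nn.Module):
    """RGB image in, single-channel depth map out, at the same spatial size."""
    def __init__(self):
        super().__init__()
        # Encoder: downsample spatially while widening channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(rgb))

# Example: one 256x256 RGB frame produces one 256x256 depth map.
depth = LightweightDepthNet()(torch.randn(1, 3, 256, 256))
print(depth.shape)  # torch.Size([1, 1, 256, 256])
```

Keeping the depth network this small is what makes a separate module viable for real-time use: the detector and the depth estimator can run independently on each frame rather than sharing one heavyweight backbone.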
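The detection-to-tracking-to-distance dataflow summarized in the abstract can likewise be sketched as glue code. Everything below is hypothetical: `detect`, `update_tracks`, and `estimate_depth` are placeholder interfaces rather than APIs from the paper, and reading the median depth inside a detected box is one plausible way to turn a dense depth map into a per-object distance, not necessarily the authors' method.

```python
# A minimal sketch of the modular dataflow described in the abstract: the
# detector localizes airborne objects, the tracker maintains identities across
# frames, and the depth network supplies a per-object distance estimate.
# All component method names here are hypothetical placeholders.
import numpy as np

def collision_avoidance_step(frame, detector, tracker, depth_net):
    """Process one monocular camera frame; return tracked objects with distances."""
    boxes = detector.detect(frame)               # [(x1, y1, x2, y2), ...] in pixels
    tracks = tracker.update_tracks(boxes)        # [(track_id, box), ...]
    depth_map = depth_net.estimate_depth(frame)  # HxW array of metres per pixel

    results = []
    for track_id, (x1, y1, x2, y2) in tracks:
        # Use the median depth inside the detected box as the object's distance;
        # the median is robust to background pixels bleeding into the box.
        distance_m = float(np.median(depth_map[y1:y2, x1:x2]))
        results.append({"id": track_id,
                        "box": (x1, y1, x2, y2),
                        "distance_m": distance_m})
    return results
```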
DOI: 10.48550/arxiv.2405.06749