Drone Navigation Using Region and Edge Exploitation-Based Deep CNN

Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, pp. 95441-95450
Main Authors: Arshad, Muhammad Arif, Khan, Saddam Hussain, Qamar, Suleman, Khan, Muhammad Waleed, Murtza, Iqbal, Gwak, Jeonghwan, Khan, Asifullah
Format: Article
Language: English
Subjects:
Online Access: Full Text
Description
Summary: Drones are unmanned aerial vehicles (UAVs) used for a broad range of functions, including delivery, aerial surveillance, traffic monitoring, architecture monitoring, and even battlefield operations. Drones confront significant obstacles while navigating autonomously in complex and highly dynamic environments. Moreover, the targeted objects within a dynamic environment have irregular morphology, occlusion, and minor contrast variation with the background. In this regard, a novel deep Convolutional Neural Network (CNN) based data-driven strategy is proposed for drone navigation in complex and dynamic environments. The proposed Drone Split-Transform-and-Merge Region-and-Edge (Drone-STM-RENet) CNN comprises convolutional blocks in which each block systematically applies region and edge operations to preserve a diverse set of targeted properties at multiple levels, especially in congested environments. In each block, the systematic use of average- and max-pooling operations captures region homogeneity and edge properties, respectively. Additionally, these convolutional blocks are merged at multiple levels to learn texture variation, which efficiently discriminates the target from the background and aids obstacle avoidance. Finally, Drone-STM-RENet generates a steering angle and a collision probability for each input image: the steering angle controls the drone's motion while avoiding hindrances, and the collision probability allows the UAV to spot risky situations and respond quickly. The proposed Drone-STM-RENet has been validated on two urban car-and-bicycle datasets, Udacity and Collision-Sequence, achieving considerable performance in terms of explained variance (0.99), recall (95.47%), accuracy (96.26%), and F-score (91.95%). The promising performance of Drone-STM-RENet on urban road datasets suggests that the proposed model is generalizable and can be deployed for real-time autonomous drone navigation and real-world flights.
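
To make the block design concrete, below is a minimal, hypothetical PyTorch sketch of the idea described in the summary: each block passes features through a convolution and then through parallel average-pooling (region homogeneity) and max-pooling (edge) branches that are concatenated, and the network ends in two heads emitting a steering angle and a collision probability. The class names (RegionEdgeBlock, DroneNavNet), channel counts, block depth, and layer choices are illustrative assumptions, not the authors' exact Drone-STM-RENet architecture.

# Hypothetical sketch of an STM-RENet-style block and two-head output in PyTorch.
# Channel counts, kernel sizes, and the merge scheme are illustrative assumptions;
# the paper's exact architecture is not reproduced here.
import torch
import torch.nn as nn

class RegionEdgeBlock(nn.Module):
    """Split-transform-merge block: a shared convolution feeds two parallel
    branches, where average pooling captures region homogeneity and max
    pooling captures edge responses; the branches are then concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.region = nn.AvgPool2d(kernel_size=2)  # region-smoothing branch
        self.edge = nn.MaxPool2d(kernel_size=2)    # edge-preserving branch

    def forward(self, x):
        x = self.conv(x)
        # Merge the region and edge views along the channel axis.
        return torch.cat([self.region(x), self.edge(x)], dim=1)

class DroneNavNet(nn.Module):
    """Stacks region-edge blocks, then emits a steering angle (regression)
    and a collision probability (binary classification) per input frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            RegionEdgeBlock(3, 32),     # -> 64 channels after concatenation
            RegionEdgeBlock(64, 64),    # -> 128 channels
            RegionEdgeBlock(128, 128),  # -> 256 channels
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.steering = nn.Linear(256, 1)                         # steering angle
        self.collision = nn.Sequential(nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, x):
        h = self.features(x)
        return self.steering(h), self.collision(h)

# Usage: one 200x200 RGB frame -> (steering angle, collision probability).
frame = torch.randn(1, 3, 200, 200)
angle, p_collision = DroneNavNet()(frame)

In this sketch, concatenating rather than summing the pooled branches is one plausible way to "merge" region and edge information while letting later blocks weigh the two views independently; the paper's multi-level merging may differ in detail.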
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3204876