Q-FANET: Improved Q-learning based routing protocol for FANETs
Published in: Computer networks (Amsterdam, Netherlands : 1999), 2021-10, Vol. 198, p. 108379, Article 108379
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Flying Ad-Hoc Networks (FANETs) bring ad-hoc networking to flying nodes, allowing real-time communication among these nodes and with ground control stations. Because of the nature of such nodes, the structure of a FANET is highly dynamic and changes frequently. Since FANETs are used in military scenarios and other mission-critical systems, an agile and reliable network with robust and efficient routing protocols is essential. Nonetheless, keeping the network delay introduced by route selection at an acceptable level remains a considerable challenge, owing to the nodes' high mobility. This article addresses this problem by proposing Q-FANET, a routing scheme based on an improved Q-Learning algorithm that reduces network delay in high-mobility scenarios. The proposal's performance is evaluated and compared with other state-of-the-art methods using the WSNET simulator. The experiments provide evidence that Q-FANET achieves lower delay, a slight increase in packet delivery ratio, and significantly lower jitter compared with other reinforcement learning-based routing protocols.
ISSN: 1389-1286, 1872-7069
DOI: 10.1016/j.comnet.2021.108379
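For a sense of how the class of protocol described in the abstract works, the following is a minimal sketch of generic Q-learning-based next-hop selection with a delay-driven reward. It is an illustration only: the class name, parameter values, and the assumption that the chosen neighbor reports back its best Q-value toward the destination are ours, not details of the Q-FANET algorithm itself.

```python
import random
from collections import defaultdict

class QRoutingAgent:
    """Per-node agent that learns which neighbor to forward packets to
    for each destination (illustrative sketch, not Q-FANET itself)."""

    def __init__(self, alpha=0.6, gamma=0.9, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        # Q[(destination, neighbor)]: estimated routing quality (higher is better)
        self.q = defaultdict(float)

    def choose_next_hop(self, destination, neighbors):
        """Epsilon-greedy choice of the forwarding neighbor."""
        if not neighbors:
            return None
        if random.random() < self.epsilon:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q[(destination, n)])

    def update(self, destination, neighbor, hop_delay_ms, neighbor_best_q):
        """One-step Q-learning update with a delay-based reward.

        hop_delay_ms    -- measured delay to the chosen neighbor (lower is better)
        neighbor_best_q -- the neighbor's best Q-value toward the destination,
                           assumed to be piggybacked on its acknowledgement
        """
        reward = -hop_delay_ms  # penalize delay so faster links are preferred
        old = self.q[(destination, neighbor)]
        target = reward + self.gamma * neighbor_best_q
        self.q[(destination, neighbor)] = old + self.alpha * (target - old)

# Example use at a single UAV: pick a neighbor toward the ground station,
# then learn from the observed per-hop delay.
agent = QRoutingAgent()
hop = agent.choose_next_hop("ground_station", ["uav2", "uav3"])
agent.update("ground_station", hop, hop_delay_ms=12.0, neighbor_best_q=-30.0)
```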