Trajectory tracking of QUAV based on cascade DRL with feedforward control

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2025-02, Vol. 618, Article 129057
Main Authors: He, Shuliang; Han, Haoran; Cheng, Jian
Format: Article
Language: English
Online Access: Full text
Description
Abstract: The adoption of advanced control strategies has become increasingly critical for the quadrotor unmanned aerial vehicle (QUAV). Deep reinforcement learning (DRL) stands at the forefront of these developments, showing significant utility, particularly in static target tracking for QUAVs. However, existing DRL methodologies encounter substantial latency issues when applied to trajectory tracking scenarios. This paper tackles that challenge by incorporating a feedforward technique, which enables agents to exploit high-order trajectory information. First, a cascade DRL controller for fixed-point tracking is trained, in which multiple agents are separately responsible for translational and rotational motion along each axis. The controller is then applied to trajectory tracking by incorporating feedforward information from the reference trajectory, which largely resolves the latency issue. Through extensive tracking demonstrations and quantitative analysis, the proposed integrated scheme demonstrates a significant enhancement in the trajectory tracking performance of QUAVs.
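
The Python sketch below gives a rough illustration of the scheme the abstract describes: a cascade of per-axis DRL agents in which the outer translation agents receive feedforward derivatives of the reference trajectory and their outputs serve as setpoints for the inner rotation agents. The Policy class, the observation layouts, and the pairing of translation outputs with attitude setpoints are illustrative assumptions, not the paper's actual implementation.

import numpy as np


class Policy:
    """Hypothetical stand-in for a trained DRL actor network."""

    def __init__(self, act_dim):
        self.act_dim = act_dim

    def act(self, obs):
        # A real agent would run a neural-network forward pass here.
        return np.zeros(self.act_dim)


class CascadeFeedforwardController:
    """Per-axis cascade of DRL agents with trajectory feedforward.

    Outer (translation) agents observe their axis tracking error plus
    feedforward derivatives of the reference trajectory; inner (rotation)
    agents track the attitude setpoints produced by the outer loop.
    """

    def __init__(self):
        self.translation_agents = [Policy(act_dim=1) for _ in range(3)]  # x, y, z
        self.rotation_agents = [Policy(act_dim=1) for _ in range(3)]     # roll, pitch, yaw

    def control(self, state, ref, ref_dot, ref_ddot):
        # Translation loop: position/velocity error + feedforward reference derivatives.
        setpoints = []
        for i, agent in enumerate(self.translation_agents):
            obs = np.array([
                state["pos"][i] - ref[i],
                state["vel"][i] - ref_dot[i],
                ref_dot[i],    # feedforward velocity
                ref_ddot[i],   # feedforward acceleration
            ])
            setpoints.append(agent.act(obs)[0])

        # Rotation loop: track the attitude setpoints from the outer loop.
        torques = []
        for i, agent in enumerate(self.rotation_agents):
            obs = np.array([state["att"][i] - setpoints[i], state["omega"][i]])
            torques.append(agent.act(obs)[0])
        return np.array(setpoints), np.array(torques)


# Example call with a circular reference trajectory (illustrative values only).
controller = CascadeFeedforwardController()
state = {"pos": np.zeros(3), "vel": np.zeros(3), "att": np.zeros(3), "omega": np.zeros(3)}
t = 0.0
ref = np.array([np.cos(t), np.sin(t), 1.0])
ref_dot = np.array([-np.sin(t), np.cos(t), 0.0])
ref_ddot = np.array([-np.cos(t), -np.sin(t), 0.0])
setpoints, torques = controller.control(state, ref, ref_dot, ref_ddot)

The feedforward terms (ref_dot, ref_ddot) are what allow the agents to anticipate where the reference trajectory is heading rather than react only to the current error, which is the mechanism the abstract credits with removing the tracking latency.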
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2024.129057