A Sequential Decision Algorithm of Reinforcement Learning for Composite Action Space

Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, pp. 107669-107684
Main Authors: Gao, Yuan; Wang, Ye; Zhang, Lei; Guo, Lihong; Li, Jiang; Sun, Shouhong
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Using UAV (Unmanned Aerial Vehicle) clusters to carry out electronic countermeasure tasks is a key research topic in electronic warfare. A UAV simultaneously carries payloads such as reconnaissance and interference, so multiple types of actions, namely composite actions, must be decided at the same time, which poses a challenge to intelligent decision-making algorithms. To address the dimensional complexity of the action space and the weak collaboration between decisions in multi-agent scenarios with composite actions, this study proposes a multi-agent reinforcement-learning sequential decision algorithm that converts the joint composite action into a sequence of decisions, reducing the difficulty of each individual decision and enhancing the collaboration both between agents and between an agent's own decisions. Because long decision sequences require deeper modeling and suffer from high variance, a DeLighT module is added to the naïve transformer model to increase its depth, and baseline techniques are used to reduce the variance of the value estimation. Simulation results verify the effectiveness of the proposed algorithm in a UAV cooperative combat scenario in which each agent has a composite action space, where it outperforms existing algorithms.
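
The core idea described in the abstract, replacing one joint choice over a composite action space with a sequence of sub-decisions that condition on each other, can be illustrated with a small autoregressive policy head. The following is a minimal sketch, not the authors' implementation: the class name, the three illustrative sub-actions (manoeuvre, reconnaissance target, interference target), and the network sizes are assumptions, and it omits the DeLighT-augmented transformer and the baseline-based variance reduction used in the paper.

# Minimal sketch (illustrative only) of a sequential decision over a composite
# action space: instead of choosing one joint action from |A1|*|A2|*|A3| options,
# the agent picks sub-actions one at a time, each conditioned on the observation
# and on the sub-actions already chosen.
import torch
import torch.nn as nn


class SequentialCompositePolicy(nn.Module):
    def __init__(self, obs_dim, sub_action_dims, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        # One head per sub-decision; each later head also sees an embedding of the
        # sub-actions chosen so far, so decisions can coordinate with each other.
        self.embeds = nn.ModuleList([nn.Embedding(n, hidden_dim) for n in sub_action_dims])
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, n) for n in sub_action_dims])

    def forward(self, obs):
        context = torch.tanh(self.encoder(obs))     # shared observation encoding
        chosen, log_probs = [], []
        for embed, head in zip(self.embeds, self.heads):
            logits = head(context)                  # logits for the current sub-action
            dist = torch.distributions.Categorical(logits=logits)
            a = dist.sample()
            chosen.append(a)
            log_probs.append(dist.log_prob(a))
            # Condition the next sub-decision on what was just chosen.
            context = context + embed(a)
        return chosen, torch.stack(log_probs).sum(0)


# Example: an 8-dim observation and a composite action of
# (manoeuvre, reconnaissance target, interference target).
policy = SequentialCompositePolicy(obs_dim=8, sub_action_dims=[5, 3, 3])
actions, logp = policy(torch.randn(4, 8))           # batch of 4 observations
print([a.shape for a in actions], logp.shape)

In this factorization the per-step branching factor is the size of a single sub-action set rather than the product of all of them, which is the "reducing the difficulty of a single decision" effect the abstract refers to.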
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3320137