Hierarchical Reinforcement Learning-Based Joint Allocation of Jamming Task and Power for Countering Networked Radar

Bibliographic Details
Published in: IEEE Transactions on Aerospace and Electronic Systems, 2024-09, pp. 1-19
Authors: Wang, Yuedong; Liang, Yan; Wang, Zengfu
Format: Article
Language: English
Abstract: The detection fusion and anti-jamming capabilities of a networked radar (NR) create a significant dynamic game between the NR and the jammer, making immediate jamming strategies that maximize only the current jamming benefit unsuitable. Therefore, this paper formulates a long-term joint optimization problem of jamming task and power allocation in the NR anti-jamming fusion game. Specifically, a jammer is employed to disrupt the joint detection capability of an NR equipped with detection fusion and jamming suppression mechanisms. Drawing on the task decomposition and hierarchical control found in human decision-making, a hierarchical reinforcement learning-based jamming resource allocation scheme is established. This scheme designs a hierarchical policy network with a shared evaluation network, achieving joint optimization of hybrid discrete (jamming task) and continuous (jamming power) control variables. The value loss, top-level policy loss, and low-level policy loss, all constructed from the total reward, are optimized to update the parameters of the evaluation network and the hierarchical policy network, thereby improving the allocation strategy. Moreover, the state features and total reward are designed around the jamming mission to aid the jammer's strategy exploration. Finally, the approach is compared with state-of-the-art deep reinforcement learning (DRL) algorithms and timely optimization methods, demonstrating superior jamming performance and shorter decision-making time under typical parameters that account for radar deployment and formation motion.
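Only the abstract is available here, not the paper's architecture details. Purely as illustration, the sketch below shows one way a hierarchical actor with a shared evaluation (critic) network can emit a hybrid action, a discrete jamming-task choice from a top-level head and a continuous power allocation from a low-level head, as the abstract describes. All layer sizes, the single shared feature layer, and the softmax split of a power budget are assumptions for this sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

class HierarchicalJammingPolicy:
    """Minimal sketch: shared features feed a discrete top-level task head,
    a continuous low-level power head, and a shared value (critic) head."""

    def __init__(self, state_dim, n_tasks, n_radars, hidden=32):
        # One shared feature layer; the three heads branch off it.
        self.W_shared = rng.normal(scale=0.1, size=(state_dim, hidden))
        self.W_task   = rng.normal(scale=0.1, size=(hidden, n_tasks))   # top level: which jamming task
        self.W_power  = rng.normal(scale=0.1, size=(hidden, n_radars))  # low level: power per radar node
        self.W_value  = rng.normal(scale=0.1, size=(hidden, 1))         # shared evaluation network

    def act(self, state, power_budget=1.0):
        h = np.tanh(state @ self.W_shared)

        # Top-level discrete action: sample a jamming task from a softmax.
        logits = h @ self.W_task
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        task = int(rng.choice(len(probs), p=probs))

        # Low-level continuous action: split the power budget across radars
        # with a softmax, so the allocation respects the total-power constraint.
        raw = h @ self.W_power
        power = power_budget * np.exp(raw - raw.max()) / np.exp(raw - raw.max()).sum()

        # Shared critic estimate used by all three losses during training.
        value = float(h @ self.W_value)
        return task, power, value

# Usage: one decision step for an 8-dimensional state, 3 tasks, 4 radar nodes.
policy = HierarchicalJammingPolicy(state_dim=8, n_tasks=3, n_radars=4)
state = rng.normal(size=8)
task, power, value = policy.act(state)
```

In an actor-critic training loop, the value head would be fit to the total reward while the two policy heads are updated with their respective top-level and low-level policy losses; the softmax budget split is just one simple way to keep the continuous powers feasible.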
ISSN: 0018-9251, 1557-9603
DOI: 10.1109/TAES.2024.3467041