Channel-Hopping Using Reinforcement Learning for Rendezvous in Asymmetric Cognitive Radio Networks



Bibliographic Details
Published in: Applied Sciences, 2024-12, Vol. 14 (23), p. 11369
Main Authors: Jin, Dongsup; Jang, Minho; Jang, Ji-Woong; Kong, Gyuyeol
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: This paper addresses the rendezvous problem in asymmetric cognitive radio networks (CRNs) by proposing a novel reinforcement learning (RL)-based channel-hopping algorithm. Traditional methods such as the jump-stay (JS) algorithm, while effective, often suffer from high time-to-rendezvous (TTR) in asymmetric scenarios where secondary users (SUs) have differing channel availability. The proposed RL-based algorithm leverages the actor-critic policy gradient method to learn optimal channel-selection strategies, dynamically adapting to the environment to minimize TTR. Extensive simulations demonstrate that the RL-based algorithm significantly reduces the expected TTR (ETTR) compared to the JS algorithm, particularly in asymmetric scenarios where M-sequence-based approaches are less effective. This suggests that RL-based approaches not only offer robustness in asymmetric environments but also provide a promising alternative in more predictable settings.
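The abstract gives no implementation details, so the following is only a rough illustrative sketch of the actor-critic idea it describes: a softmax channel-selection policy trained to reduce time-to-rendezvous against a partner in an asymmetric two-user setting. The channel count, availability vectors, reward shaping, hyperparameters, and the partner's uniform hopping are all assumptions for illustration, not the paper's model.

```python
# Illustrative sketch only (not the authors' implementation): actor-critic
# channel selection for rendezvous under asymmetric channel availability.
# All constants below are assumed for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)

M = 5                                    # total licensed channels (assumed)
avail_a = np.array([1, 1, 0, 1, 0])      # channels sensed free by SU A
avail_b = np.array([0, 1, 1, 1, 1])      # channels sensed free by SU B (asymmetric)

theta = np.zeros(M)                      # actor: softmax preferences over channels
v = 0.0                                  # critic: scalar baseline (state-free here)
alpha_actor, alpha_critic, gamma = 0.1, 0.05, 0.99

def masked_policy(theta, avail):
    # softmax over preferences, restricted to the user's available channels
    z = np.exp(theta - theta.max()) * avail
    return z / z.sum()

for episode in range(2000):
    t, done = 0, False
    while not done and t < 50:
        p_a = masked_policy(theta, avail_a)
        ch_a = rng.choice(M, p=p_a)                   # SU A follows the learned policy
        ch_b = rng.choice(np.flatnonzero(avail_b))    # SU B hops uniformly (assumed)
        done = ch_a == ch_b
        reward = 1.0 if done else -0.01               # rendezvous bonus vs. time cost
        td_error = reward + (0.0 if done else gamma * v) - v
        v += alpha_critic * td_error                  # critic update
        grad_log = -p_a                               # d log pi / d theta for softmax
        grad_log[ch_a] += 1.0
        theta += alpha_actor * td_error * grad_log    # actor (policy-gradient) update
        t += 1

print("learned channel preferences:", np.round(masked_policy(theta, avail_a), 3))
```

Run as written, the policy concentrates probability on the channels available to both users (indices 1 and 3 in this toy setup), which is what drives the ETTR reduction the abstract reports; the paper's actual method would additionally condition on observed history and sensed availability rather than fixed vectors.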
ISSN: 2076-3417
DOI: 10.3390/app142311369