Q-learning based task scheduling and energy-saving MAC protocol for wireless sensor networks


Bibliographic Details
Published in: Wireless Networks 2024-08, Vol. 30 (6), p. 4989-5005
Authors: Jaber, Mustafa Musa; Ali, Mohammed Hassan; Abd, Sura Khalil; Jassim, Mustafa Mohammed; Alkhayyat, Ahmed; Jassim, Mohammed; Alkhuwaylidee, Ahmed Rashid; Nidhal, Lahib
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: The primary problem for a resource-limited Wireless Sensor Network (WSN) is how to extend system reliability without sacrificing performance metrics such as reception rate and network connectivity. The proposed method effectively deploys sensor nodes and ensures connectivity between the nodes and the transceiver station. Reinforcement Learning (RL) can effectively schedule the sensor nodes' unsupervised activities. A Nash Q-learning-inspired node task scheduling scheme (QL-TS) for service and connection management is described in this work, and an energy-saving MAC protocol (ESMACP) has been developed to extend the system lifetime. The primary aim of this model is for the proposed QL-TS-ESMACP to allow sensor devices to learn their best action with minimal energy consumption. The correctness and dependability of QL-TS-ESMACP are demonstrated by comparison with other existing methods. In the simulations, QL-TS-ESMACP outperforms other models in terms of energy efficiency, coverage, node lifespan, and packet delivery ratio.
ISSN: 1022-0038, 1572-8196
DOI: 10.1007/s11276-022-03184-6
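
For readers unfamiliar with the Q-learning scheduling idea summarized in the abstract, the sketch below illustrates the general technique only: a sensor node learning, via a standard one-step Q-learning update, when to sleep, listen, or transmit. It is not the paper's QL-TS-ESMACP algorithm; the state set, action set, reward weights, and simulated traffic are illustrative assumptions.

```python
# Minimal, generic sketch of Q-learning for a sensor node's duty-cycle decision.
# States, actions, and reward shaping are assumptions, not the paper's design.
import random

ACTIONS = ["sleep", "listen", "transmit"]      # assumed action set
STATES = ["low_battery", "ok_battery"]         # assumed coarse state set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # learning rate, discount, exploration

# Q-table: Q[state][action] -> estimated long-term reward
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}

def choose_action(state: str) -> str:
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def reward(action: str, delivered: bool) -> float:
    """Toy reward: favor successful delivery, penalize energy spent (assumed weights)."""
    energy_cost = {"sleep": 0.0, "listen": 0.2, "transmit": 1.0}[action]
    return (2.0 if delivered else 0.0) - energy_cost

def q_update(state: str, action: str, r: float, next_state: str) -> None:
    """Standard one-step Q-learning update."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])

# Tiny simulated run: packets arrive randomly; transmitting while a packet is
# pending counts as a delivery, and the battery state drifts at random.
state = "ok_battery"
for _ in range(1000):
    action = choose_action(state)
    packet_pending = random.random() < 0.3
    delivered = (action == "transmit" and packet_pending)
    next_state = "low_battery" if random.random() < 0.05 else "ok_battery"
    q_update(state, action, reward(action, delivered), next_state)
    state = next_state

print(Q)
```

In this toy setup the learned Q-values steer the node toward transmitting only when it pays off and sleeping otherwise, which is the same energy-versus-delivery trade-off the abstract attributes to QL-TS-ESMACP, though the paper's actual reward design and Nash Q-learning formulation differ.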