A novel dynamic multi-objective task scheduling optimization based on Dueling DQN and PER
Published in: The Journal of Supercomputing, December 2023, Vol. 79, Issue 18, pp. 21368-21423
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Task scheduling (TS) in cloud computing is a complex problem that involves balancing workload distribution, resource allocation, and power consumption. Existing methods often fail to optimize these objectives simultaneously and efficiently. This paper introduces a novel technique for scheduling independent tasks in cloud computing using multi-objective optimization and deep reinforcement learning (DRL). The proposed technique, DMOTS-DRL, combines dueling deep Q-networks (Dueling DQN) with dynamic prioritized experience replay (PER) to optimize two critical objectives: scheduling completion time (makespan) and power consumption. The performance of DMOTS-DRL is evaluated using CloudSim and compared with several state-of-the-art TS algorithms. The experimental results show that DMOTS-DRL outperforms the other algorithms in reducing makespan and power consumption, demonstrating its effectiveness and reliability for cloud computing services. Specifically, DMOTS-DRL achieves percentage improvements ranging from −44.04% to −0.19% in makespan and from −0.26% to −27.90% in power consumption, along with better performance on other metrics such as energy consumption, degree of imbalance, resource utilization, and average waiting time.
ISSN: 0920-8542, 1573-0484
DOI: 10.1007/s11227-023-05489-5
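As a rough illustration of the two building blocks named in the abstract, the Python sketch below shows a minimal dueling Q-network together with a simplified proportional prioritized replay buffer and a scalarized makespan/power reward. This is not the authors' DMOTS-DRL implementation; the class names, the reward weighting, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch of the components named in the abstract (not the paper's code).
import random
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Q-network with separate value and advantage streams (Dueling DQN)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        # Standard dueling aggregation: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
        return v + a - a.mean(dim=-1, keepdim=True)

def multi_objective_reward(makespan: float, power: float,
                           w_makespan: float = 0.5, w_power: float = 0.5) -> float:
    """Hypothetical scalarized reward: lower makespan and power are better,
    so both terms are negated; the weights are purely illustrative."""
    return -(w_makespan * makespan + w_power * power)

class PrioritizedReplayBuffer:
    """Simplified proportional PER: transitions are sampled with probability
    proportional to priority ** alpha. Importance-sampling weights and the
    paper's dynamic prioritization schedule are omitted for brevity."""
    def __init__(self, capacity: int = 10_000, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, priority: float = 1.0):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size: int):
        probs = [p ** self.alpha for p in self.priorities]
        total = sum(probs)
        weights = [p / total for p in probs]
        return random.choices(self.buffer, weights=weights, k=batch_size)
```

Scalarizing the two objectives into a single reward is only one way to pose the multi-objective problem; the dynamic PER described in the paper additionally adapts how transitions are prioritized during training, which this sketch does not attempt to reproduce.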