Robust and efficient task scheduling for robotics applications with reinforcement learning


Detailed description

Bibliographic details
Published in: Engineering Applications of Artificial Intelligence, 2024-01, Vol. 127, p. 107300, Article 107300
Main authors: Tejer, Mateusz; Szczepanski, Rafal; Tarczewski, Tomasz
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: Effective task scheduling can significantly impact performance, productivity, and profitability in many real-world settings, such as production lines, logistics, and transportation systems. Traditional approaches to task scheduling rely on heuristics or simple rule-based methods. However, with the emergence of machine learning and artificial intelligence, there is growing interest in using these methods to optimize task scheduling. In particular, reinforcement learning is a promising task-scheduling approach because it can learn from experience and adapt to changing conditions. One step that is often missed or neglected is choosing optimal algorithm parameters and considering the different ways the environment could be implemented. This study analyzes the performance of task scheduling using reinforcement learning. The in-depth analysis allows highly efficient environment models and Q-learning parameters to be selected. Moreover, automatic selection based on optimization algorithms is proposed. Regardless of the selected optimal parameters, resilience to environmental changes remains poor. This analysis motivated the authors to develop a novel Hybrid Q-learning approach, which provides superior efficiency regardless of the environmental parameters.
• Optimal scheduling requires a proper environment model.
• Sub-optimal Q-learning parameters do not give optimal results.
• The Hybrid Q-learning approach allows optimal scheduling to be maintained.
• The proposed approach is resilient to non-deterministic environments.
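To illustrate the kind of Q-learning the abstract refers to, the sketch below applies tabular Q-learning to a toy task-scheduling problem. The environment (three tasks, hypothetical processing times) and the parameter values (alpha, gamma, epsilon) are assumptions made for this demo; they are not the environment models, tuned parameters, or Hybrid Q-learning method studied in the paper.

```python
import random

# Toy scheduling MDP: state = set of remaining tasks; action = next task
# to run; reward = negative completion time of the finished task, so the
# total return penalizes summed completion time (favoring shortest-first).
PROC_TIME = {0: 2, 1: 5, 2: 1}  # hypothetical per-task processing times
TASKS = sorted(PROC_TIME)

def run_episode(q, alpha=0.1, gamma=0.9, epsilon=0.2):
    """One episode of tabular Q-learning: schedule all tasks once."""
    remaining = frozenset(TASKS)
    elapsed = 0
    while remaining:
        # epsilon-greedy action selection over the remaining tasks
        if random.random() < epsilon:
            action = random.choice(sorted(remaining))
        else:
            action = max(sorted(remaining),
                         key=lambda t: q.get((remaining, t), 0.0))
        elapsed += PROC_TIME[action]
        reward = -elapsed
        nxt = remaining - {action}
        best_next = max((q.get((nxt, t), 0.0) for t in nxt), default=0.0)
        old = q.get((remaining, action), 0.0)
        # standard Q-learning update rule
        q[(remaining, action)] = old + alpha * (reward + gamma * best_next - old)
        remaining = nxt

random.seed(0)
q_table = {}
for _ in range(2000):
    run_episode(q_table)

# Greedy first choice from the full task set; with these processing
# times the learned policy should start with the shortest task (task 2).
first = max(TASKS,
            key=lambda t: q_table.get((frozenset(TASKS), t), float("-inf")))
print("first scheduled task:", first)
```

Note that the state here is just the set of remaining tasks, which is only valid because the elapsed time is fully determined by which tasks are done; richer environment models (one of the design choices the paper examines) would need a larger state.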
ISSN: 0952-1976 (print)
ISSN: 1873-6769 (online)
DOI: 10.1016/j.engappai.2023.107300