An optimized Q-Learning algorithm for mobile robot local path planning

Bibliographic Details
Published in: Knowledge-Based Systems, 2024-02, Vol. 286, p. 111400, Article 111400
Main authors: Zhou, Qian; Lian, Yang; Wu, Jiayang; Zhu, Mengyue; Wang, Haiyong; Cao, Jinli
Format: Article
Language: English
Online access: Full text
Description
Abstract: The Q-Learning algorithm is a reinforcement learning technique widely used in fields such as path planning, intelligent transportation, and penetration testing. It centers on the interaction between an agent and its environment, enabling the agent to learn an optimal strategy that maximizes cumulative reward. Most non-agent-based path planning algorithms struggle to explore completely unknown environments effectively, lacking efficient perception in unfamiliar settings. In addition, many Q-Learning-based path planning algorithms suffer from slow convergence and a tendency to get stuck in local optima. To address these issues, an optimized version of the Q-Learning algorithm (Optimized Q-Learning, O-QL) is proposed and applied to local path planning for mobile robots. O-QL introduces novel Q-table initialization methods, a new action-selection policy, and a new reward function, and adapts the Root Mean Square Propagation (RMSprop) method for learning-rate adjustment. The learning rate is tuned dynamically according to gradient changes to accelerate learning and improve path-planning efficiency. Simulation experiments are carried out in three maze environments of differing complexity, and local path-planning performance is evaluated using four metrics: number of steps, exploration reward, learning-rate change, and running time. The experimental results show that O-QL improves on existing algorithms across all four metrics.
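The abstract describes the mechanism only at a high level. As a rough, hypothetical sketch of the general idea (not the authors' O-QL), the Python snippet below couples a tabular Q-Learning update with an RMSprop-style running average of squared temporal-difference errors to adapt the learning rate. The state/action sizes, hyperparameters, zero-initialized Q-table, and epsilon-greedy policy are all assumptions; the paper's own Q-table initialization, action-selection policy, and reward function are not reproduced here.

import numpy as np

# Placeholder maze/agent dimensions and hyperparameters; the paper's actual
# settings are not reproduced here.
n_states, n_actions = 100, 4
alpha_0, gamma, epsilon = 0.1, 0.9, 0.1   # base learning rate, discount factor, exploration rate
beta, eps_rms = 0.9, 1e-8                 # RMSprop decay factor and stability term

Q = np.zeros((n_states, n_actions))   # Q-table (O-QL proposes a dedicated initialization instead)
v = np.zeros((n_states, n_actions))   # running average of squared TD errors

rng = np.random.default_rng(0)

def select_action(state):
    """Epsilon-greedy choice, standing in for the paper's action-selection policy."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """One tabular Q-Learning step with an RMSprop-style adaptive learning rate."""
    td_error = reward + gamma * np.max(Q[next_state]) - Q[state, action]
    # Accumulate squared TD errors, then scale the step size inversely to their magnitude.
    v[state, action] = beta * v[state, action] + (1 - beta) * td_error ** 2
    alpha = alpha_0 / (np.sqrt(v[state, action]) + eps_rms)
    Q[state, action] += alpha * td_error

In this sketch the per-state-action statistic v plays the role RMSprop's squared-gradient average plays in gradient descent, so state-action pairs with consistently large TD errors receive smaller effective step sizes, which is one plausible reading of the learning-rate adaptation the abstract describes.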
ISSN: 0950-7051, 1872-7409
DOI: 10.1016/j.knosys.2024.111400