A mobile robot controller using reinforcement learning under scLTL specifications with uncertainties


Bibliographic Details
Published in: Asian journal of control 2022-11, Vol. 24 (6), p. 2916-2930
Main Authors: Mi, Jian; Kuze, Naomi; Ushio, Toshimitsu
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: The control of a mobile robot is a well‐studied problem in robotics that can be solved easily when the robot performs simple tasks without uncertainties. However, uncertainties always exist when a mobile robot performs a series of tasks in the real world. In this paper, we propose a reinforcement learning (RL)‐based controller for a mobile robot performing a task with uncertainties. We consider the case where a task consists of several subtasks described by syntactically co‐safe linear temporal logic (scLTL) specifications, and each scLTL specification is transformed into a finite state automaton (FSA) that accepts all behaviors satisfying the specification. We propose a reinforcement learning with FSA‐encoder (RLwF) method that rapidly learns an optimal control policy for performing tasks in an environment with uncertainties, such as jammers that prevent the mobile robot from completing its specified tasks. By simulation, we demonstrate that the proposed controller learns an optimal policy and generates an optimal path to perform the designed tasks.
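The core construction the abstract describes, learning a policy over the product of the robot's environment and an FSA derived from an scLTL specification, can be illustrated with a minimal sketch. This is not the paper's RLwF method or its FSA encoder: the grid world, the hand-built FSA for the formula F(a & F b) ("eventually visit a, then eventually visit b"), the acceptance-only reward, and all hyperparameters are illustrative assumptions. Tabular Q-learning stands in for the paper's learning algorithm.

```python
import random

# Illustrative assumptions: a 4x4 grid world, two labeled cells, and an FSA
# for the scLTL formula F(a & F b). The agent learns over product states
# (cell, q) and is rewarded only when the FSA reaches its accepting state.
N = 4
A_CELL, B_CELL = (0, 3), (3, 3)               # cells labeled "a" and "b"
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def label(cell):
    return "a" if cell == A_CELL else "b" if cell == B_CELL else ""

def fsa_step(q, lab):
    # FSA for F(a & F b): q0 --a--> q1 --b--> q2 (accepting); otherwise stay.
    if q == 0 and lab == "a":
        return 1
    if q == 1 and lab == "b":
        return 2
    return q

def env_step(cell, q, action):
    # Move on the grid (clamped at the walls), then advance the FSA on the
    # label of the new cell. Reward 1 only on acceptance.
    (r, c), (dr, dc) = cell, action
    nr, nc = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    nq = fsa_step(q, label((nr, nc)))
    return (nr, nc), nq, (1.0 if nq == 2 else 0.0), nq == 2

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    # Plain tabular Q-learning on product states (cell, q).
    rng, Q = random.Random(seed), {}
    for _ in range(episodes):
        cell, q = (0, 0), 0
        for _ in range(50):
            s = (cell, q)
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            cell2, q2, r, done = env_step(cell, q, ACTIONS[a])
            nxt = 0.0 if done else max(
                Q.get(((cell2, q2), i), 0.0) for i in range(len(ACTIONS)))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                r + gamma * nxt - Q.get((s, a), 0.0))
            cell, q = cell2, q2
            if done:
                break
    return Q

def greedy_run(Q, max_steps=50):
    # Follow the learned greedy policy; True if the spec is satisfied.
    cell, q = (0, 0), 0
    for _ in range(max_steps):
        s = (cell, q)
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        cell, q, _, done = env_step(cell, q, ACTIONS[a])
        if done:
            return True
    return False
```

The key design point mirrored from the abstract is that the FSA state q is part of the learning state, so the policy can depend on which subtasks have already been completed.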
ISSN: 1561-8625; 1934-6093
DOI: 10.1002/asjc.2712