Practical Resolution Methods for MDPs in Robotics Exemplified With Disassembly Planning
Published in: IEEE Robotics and Automation Letters, July 2019, Vol. 4(3), pp. 2282-2288
Main authors: , ,
Format: Article
Language: English
Online access: Order full text
Abstract: In this letter, we focus on finding practical resolution methods for Markov decision processes (MDPs) in robotics. Two of the main difficulties of applying MDPs to real-world robotics problems are: first, having to deal with huge state spaces; and second, designing a method that is robust to dead ends. These complications restrict or hinder the application of methods such as value iteration, policy iteration, or labeled real-time dynamic programming (LRTDP). We see determinization and heuristic search as a way to work around these problems. In addition, we believe that many practical use cases offer the opportunity to identify hierarchies of subtasks and solve smaller, simplified problems. We propose a decision-making unit that operates in a probabilistic planning setting through stochastic shortest path problems, which generalize the most common types of MDPs. Our decision-making unit combines: first, automatic hierarchical organization of subtasks; and second, on-line resolution via determinization. We argue that several applications of planning benefit from these two strategies. We exemplify our approach with a robotized disassembly application. The disassembly problem is modeled in the probabilistic planning domain definition language (PPDDL) and serves to define our experiments. Our results show many advantages of our method over LRTDP, such as better handling of problems with large state spaces and of state definitions that change when new fluents are discovered.
ISSN: 2377-3766
DOI: 10.1109/LRA.2019.2901905
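
The abstract's key resolution strategy is on-line planning via determinization: the probabilistic operators are relaxed into deterministic ones so that a fast classical planner can be used, with replanning whenever execution deviates from the plan. Below is a minimal Python sketch of the all-outcomes variant of this idea, under stated assumptions: the `Action`/`Outcome` representation, the `unscrew` operator, its fluents, and its probabilities are hypothetical illustrations, not the paper's actual disassembly model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    prob: float          # probability of this outcome
    add: frozenset       # fluents the outcome makes true
    delete: frozenset    # fluents the outcome makes false

@dataclass(frozen=True)
class Action:
    name: str
    preconds: frozenset  # fluents required to apply the action
    outcomes: tuple      # probabilistic outcomes; probabilities sum to 1

def determinize(action: Action) -> list:
    """All-outcomes determinization: each probabilistic outcome becomes
    its own deterministic action. A classical planner can then search
    the determinized problem, and the agent replans whenever execution
    yields an outcome the current plan did not predict."""
    return [
        Action(
            name=f"{action.name}_outcome{i}",
            preconds=action.preconds,
            outcomes=(Outcome(1.0, o.add, o.delete),),
        )
        for i, o in enumerate(action.outcomes)
    ]

# Hypothetical disassembly operator: unscrewing succeeds 80% of the
# time and strips the screw head otherwise.
unscrew = Action(
    name="unscrew",
    preconds=frozenset({"screw-accessible"}),
    outcomes=(
        Outcome(0.8, frozenset({"screw-removed"}), frozenset({"screw-accessible"})),
        Outcome(0.2, frozenset({"screw-stripped"}), frozenset()),
    ),
)

for det in determinize(unscrew):
    print(det.name, "adds", sorted(det.outcomes[0].add))
```

The all-outcomes scheme, familiar from replanning approaches such as FF-Replan, trades probability awareness for planning speed; the hierarchical organization of subtasks the abstract mentions would sit on top of such a planner, invoking it on smaller, simplified subproblems.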