Incentive Learning in Monte Carlo Tree Search

Bibliographic Details
Published in: IEEE Transactions on Computational Intelligence and AI in Games, 2013-12, Vol. 5 (4), pp. 346-352
Authors: Kao, Kuo-Yuan; Wu, I-Chen; Yen, Shi-Jim; Shan, Yi-Chang
Format: Article
Language: English
Abstract: Monte Carlo tree search (MCTS) is a search paradigm that has been remarkably successful in computer games like Go. It uses Monte Carlo simulation to evaluate the values of nodes in a search tree. The node values are then used to select actions during subsequent simulations. The performance of MCTS heavily depends on the quality of its default policy, which guides the simulations beyond the search tree. In this paper, we propose an MCTS improvement, called incentive learning, which learns the default policy online. This new default policy learning scheme is based on ideas from combinatorial game theory, and hence is particularly useful when the underlying game is a sum of games. To illustrate the efficiency of incentive learning, we describe a game named Heap-Go and present experimental results on the game.
ISSN: 1943-068X, 1943-0698
DOI: 10.1109/TCIAIG.2013.2248086
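
The mechanism the abstract describes (node values estimated from simulations guide in-tree action selection, while a default policy steers the playouts beyond the tree) can be made concrete with a short sketch. The Python below is a minimal UCT-style MCTS on a toy Nim game (take 1-3 stones from a heap; whoever takes the last stone wins), chosen as a simple stand-in, not the paper's Heap-Go. The uniform-random random_default_policy is a placeholder for exactly the component that incentive learning would learn online; all names and the game itself are illustrative assumptions, not taken from the paper.

    # Minimal UCT-style MCTS sketch with a pluggable default (rollout) policy.
    # Toy game: Nim with one heap, moves of 1-3 stones, last stone wins.
    import math
    import random

    class Node:
        def __init__(self, state, player, parent=None, move=None):
            self.state = state        # stones remaining in the heap
            self.player = player      # player to move: 0 or 1
            self.parent = parent
            self.move = move          # move that led to this node
            self.children = []
            self.untried = [m for m in (1, 2, 3) if m <= state]
            self.visits = 0
            self.wins = 0.0           # wins from the perspective of parent.player

    def uct_select(node, c=1.4):
        # Pick the child maximizing the UCT score (exploitation + exploration).
        return max(node.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))

    def random_default_policy(state):
        # Default policy: uniform over legal moves. Incentive learning would
        # replace this with a policy updated online during the search.
        return random.choice([m for m in (1, 2, 3) if m <= state])

    def rollout(state, player):
        # Play out to the end with the default policy; return the winner.
        while True:
            state -= random_default_policy(state)
            if state == 0:
                return player         # current mover took the last stone
            player = 1 - player

    def mcts(root_state, root_player, iterations=5000):
        root = Node(root_state, root_player)
        for _ in range(iterations):
            node = root
            # 1. Selection: descend while fully expanded and non-terminal.
            while not node.untried and node.children:
                node = uct_select(node)
            # 2. Expansion: add one untried child.
            if node.untried:
                m = node.untried.pop()
                child = Node(node.state - m, 1 - node.player, node, m)
                node.children.append(child)
                node = child
            # 3. Simulation beyond the tree, using the default policy.
            if node.state == 0:
                winner = node.parent.player  # the move into node ended the game
            else:
                winner = rollout(node.state, node.player)
            # 4. Backpropagation: update visit counts and win statistics.
            while node is not None:
                node.visits += 1
                if node.parent is not None and winner == node.parent.player:
                    node.wins += 1
                node = node.parent
        return max(root.children, key=lambda ch: ch.visits).move

    if __name__ == "__main__":
        print("best move from a 10-stone heap:", mcts(10, 0))

Running this from a 10-stone heap should recommend taking 2 stones, leaving a multiple of 4 (the losing positions in this Nim variant). Swapping random_default_policy for a stronger, online-learned policy is the lever the paper studies: the tree statistics stay the same, but the playout results they are built from become more informative.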