Automatic discretization of actions and states in Monte-Carlo tree search
Format: Conference paper
Language: English
Online access: Order full text
Abstract: While Monte Carlo Tree Search (MCTS) represented a revolution in game-related AI research, it is currently unfit for tasks that involve continuous actions and, often as a consequence, continuous game states. Recent applications of MCTS to quasi-continuous games such as no-limit Poker variants have circumvented this problem by discretizing the action space or the state space. We present Tree Learning Search (TLS) as an alternative to such a priori discretization. TLS employs ideas from data stream mining, combining incremental tree induction with MCTS to construct game-state-dependent discretizations that let MCTS focus its sampling more efficiently on regions of the search space with promising returns. We evaluate TLS on global function optimization problems to illustrate its potential and report results from an early implementation in a full-scale no-limit Texas Hold'em Poker bot.
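To make the core idea concrete, below is a minimal sketch in Python of how Monte Carlo sampling can be combined with incremental tree induction to discretize a continuous action range adaptively, illustrated on a toy 1-D function optimization task like the ones the abstract mentions. All class names, the count-based splitting rule, and the exploration bonus are illustrative assumptions; the paper's actual TLS algorithm uses statistics from data stream mining to decide splits, which are not detailed in this record.

import random

class RangeNode:
    """A node covering a continuous action interval [lo, hi)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.n = 0          # samples drawn from this interval
        self.total = 0.0    # sum of observed returns
        self.children = None

    def mean(self):
        return self.total / self.n if self.n else 0.0

    def sample(self, reward_fn, split_after=50):
        # Internal node: descend into the child interval that currently
        # looks most promising (small random bonus for exploration;
        # a UCB-style rule would also fit here).
        if self.children:
            child = max(self.children,
                        key=lambda c: c.mean() + 0.1 * random.random())
            return child.sample(reward_fn, split_after)
        # Leaf: draw a concrete action uniformly from the interval.
        action = random.uniform(self.lo, self.hi)
        reward = reward_fn(action)
        self.n += 1
        self.total += reward
        # Incremental tree-induction step (placeholder rule): once enough
        # evidence has accumulated, split the interval in half so that
        # subsequent sampling can specialize to the better subrange.
        if self.n >= split_after:
            mid = (self.lo + self.hi) / 2
            self.children = [RangeNode(self.lo, mid), RangeNode(mid, self.hi)]
        return reward

# Usage: maximize a toy function over the action range [0, 10).
root = RangeNode(0.0, 10.0)
f = lambda x: -(x - 7.3) ** 2   # unknown to the sampler; maximum at 7.3
for _ in range(5000):
    root.sample(f)
# The leaf intervals now form an adaptive discretization that is finest
# near the optimum, rather than a uniform a priori grid.

The design point this sketch tries to mirror is that the discretization is an output of sampling rather than an input to it: regions with promising returns get split and sampled more finely, while poor regions stay coarse.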