Mastering the Game of Sungka from Random Play
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Recent work in reinforcement learning demonstrated that learning solely through self-play is not only possible, but can also result in novel strategies that humans would never have thought of. However, optimization methods cast as a game between two players require careful tuning to prevent suboptimal results. Hence, we look at random play as an alternative method. In this paper, we train a DQN agent to play Sungka, a two-player turn-based board game in which the players compete to capture more stones than their opponent. We show that even with purely random play, our training algorithm converges quickly and is stable. Moreover, we test our trained agent against several baselines and show that it consistently wins against them.
DOI: 10.48550/arxiv.1905.07102
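The sketch below illustrates the kind of setup the abstract describes: a DQN agent whose training experience comes from games against an opponent that plays uniformly random legal moves. It is not the authors' implementation; `SungkaEnv`, its `reset()`/`step()`/`legal_actions()` interface, the state encoding, the network size, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch of DQN training against a random-play opponent, assuming a
# hypothetical SungkaEnv whose step(action) applies the agent's move, then the
# opponent's uniformly random legal reply, and returns (state, reward, done).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

N_PITS = 7          # pits per player in Sungka (each player also has a head/store)
STATE_DIM = 16      # assumed encoding: stone counts of 14 pits + 2 heads
GAMMA, EPS, LR = 0.99, 0.1, 1e-3


class QNet(nn.Module):
    """Small MLP mapping a board state to one Q-value per pit."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_PITS),
        )

    def forward(self, x):
        return self.body(x)


def select_action(qnet, state, legal):
    """Epsilon-greedy choice restricted to the legal pits."""
    if random.random() < EPS:
        return random.choice(legal)
    with torch.no_grad():
        q = qnet(torch.as_tensor(state, dtype=torch.float32))
    masked = torch.full_like(q, float("-inf"))
    masked[legal] = q[legal]
    return int(masked.argmax())


def train(env, episodes=1000, batch_size=64):
    qnet = QNet()
    opt = torch.optim.Adam(qnet.parameters(), lr=LR)
    buffer = deque(maxlen=50_000)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = select_action(qnet, state, env.legal_actions())
            next_state, reward, done = env.step(action)
            buffer.append((state, action, reward, next_state, done))
            state = next_state
            if len(buffer) >= batch_size:
                # One-step TD update on a random minibatch of transitions.
                s, a, r, s2, d = map(list, zip(*random.sample(buffer, batch_size)))
                s = torch.tensor(s, dtype=torch.float32)
                s2 = torch.tensor(s2, dtype=torch.float32)
                r = torch.tensor(r, dtype=torch.float32)
                d = torch.tensor(d, dtype=torch.float32)
                a = torch.tensor(a)
                q_sa = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    target = r + GAMMA * (1 - d) * qnet(s2).max(1).values
                loss = F.mse_loss(q_sa, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return qnet
```

Because the opponent's policy is fixed (uniform random), the agent faces a stationary environment, which is one plausible reading of why the abstract reports fast, stable convergence compared with self-play, where both players change during training.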