Winning Isn't Everything: Enhancing Game Development with Intelligent Agents
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Tags:
Abstract: Recently, there have been several high-profile achievements of agents learning to play games against humans and beat them. In this paper, we study the problem of training intelligent agents in service of game development. Unlike agents built to "beat the game", our agents aim to produce human-like behavior to help with game evaluation and balancing. We discuss two fundamental metrics by which we measure the human-likeness of agents, namely skill and style, which are multi-faceted concepts with practical implications outlined in this paper. We report four case studies in which the style and skill requirements inform the choice of algorithms and metrics used to train agents, ranging from A* search to state-of-the-art deep reinforcement learning. We further show that the learning potential of state-of-the-art deep RL models does not seamlessly transfer from benchmark environments to target ones without heavy hyperparameter tuning, leading to linear scaling of the engineering effort and computational cost with the number of target domains.
DOI: 10.48550/arxiv.1903.10545