Beyond Winning and Losing: Modeling Human Motivations and Behaviors Using Inverse Reinforcement Learning
Format: Article
Language: English
Abstract: In recent years, reinforcement learning (RL) methods have been applied to model gameplay with great success, achieving super-human performance in various environments, such as Atari, Go, and Poker. However, those studies mostly focus on winning the game and have largely ignored the rich and complex human motivations that are essential for understanding different players' diverse behaviors. In this paper, we present a novel method called Multi-Motivation Behavior Modeling (MMBM) that takes multifaceted human motivations into consideration and models the underlying value structure of the players using inverse RL. Our approach does not require access to the dynamics of the system, making it feasible to model complex interactive environments such as massively multiplayer online games. MMBM is tested on the World of Warcraft Avatar History dataset, which records over 70,000 users' gameplay over a three-year period. Our model reveals significant differences in the value structures of different player groups. Using the results of the motivation modeling, we also predict and explain players' diverse gameplay behaviors and provide a quantitative assessment of how redesigning the game environment impacts their behavior.
DOI: 10.48550/arxiv.1807.00366
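The abstract does not spell out the algorithm, but one common model-free way to recover a linear value structure from logged behavior, consistent with the claim that no system dynamics are needed, is to fit a softmax (conditional-logit) choice model over the actions a player could have taken at each step. The sketch below is a minimal illustration under that assumption only; `fit_reward_weights`, the feature setup, and the synthetic data are hypothetical stand-ins, not the paper's MMBM implementation.

```python
# Illustrative sketch: conditional-logit style inverse RL, assuming each
# logged decision records the chosen action plus the feature vectors of
# the alternatives available at that moment. This is NOT the paper's
# exact MMBM algorithm, just a minimal model-free analogue.
import numpy as np

def fit_reward_weights(decisions, n_features, lr=0.1, epochs=200):
    """Estimate linear reward weights w from observed choices.

    decisions: list of (features, chosen) pairs, where `features` is an
        (n_actions, n_features) array of candidate-action features and
        `chosen` is the index of the action the player actually took.
    Maximizes the likelihood of P(a | s) proportional to exp(w . phi(s, a))
    by gradient ascent; no environment dynamics model is required.
    """
    w = np.zeros(n_features)
    for _ in range(epochs):
        grad = np.zeros(n_features)
        for features, chosen in decisions:
            logits = features @ w
            probs = np.exp(logits - logits.max())  # stable softmax
            probs /= probs.sum()
            # Log-likelihood gradient: observed features minus expected.
            grad += features[chosen] - probs @ features
        w += lr * grad / len(decisions)
    return w

# Hypothetical usage with two behavioral features (e.g., "combat", "social").
rng = np.random.default_rng(0)
true_w = np.array([1.5, -0.5])
decisions = []
for _ in range(500):
    feats = rng.normal(size=(4, 2))       # 4 candidate actions per step
    probs = np.exp(feats @ true_w)
    probs /= probs.sum()
    decisions.append((feats, rng.choice(4, p=probs)))
print(fit_reward_weights(decisions, n_features=2))  # approaches true_w
```

Fitting such weights separately per player group would surface the kind of value-structure differences the abstract reports, while staying entirely model-free with respect to the game's transition dynamics.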