Bayesian Inverse Reinforcement Learning for Collective Animal Movement
Format: Article
Language: English
Abstract: Agent-based methods allow for defining simple rules that generate complex group behaviors. The governing rules of such models are typically set a priori, and parameters are tuned from observed behavior trajectories. Instead of making simplifying assumptions across all anticipated scenarios, inverse reinforcement learning provides inference on the short-term (local) rules governing long-term behavior policies by using properties of a Markov decision process. We use the computationally efficient linearly-solvable Markov decision process to learn the local rules governing collective movement for a simulation of the self-propelled particle (SPP) model and for a data application to a captive guppy population. The behavioral decision costs are estimated in a Bayesian framework with basis function smoothing. We recover the true costs in the SPP simulation and find that the guppies value collective movement more than targeted movement toward shelter.
DOI: 10.48550/arxiv.2009.04003
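
The abstract's central computational tool is the linearly-solvable Markov decision process (LMDP), in which the optimal cost-to-go is recovered from a linear fixed-point equation in the desirability function z = exp(-v). The sketch below is a minimal illustration of that idea on a toy 1-D first-exit problem; it is not the authors' code or model, and the random-walk passive dynamics, unit state costs, and goal state are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): solving a toy first-exit
# LMDP by fixed-point iteration on the desirability function z = exp(-v).
import numpy as np

n_states = 20                      # small 1-D state space for illustration
goal = n_states - 1                # absorbing goal state (assumed)

# Passive (uncontrolled) dynamics: a lazy random walk on the line.
P = np.zeros((n_states, n_states))
for s in range(n_states):
    for s2 in (s - 1, s, s + 1):
        if 0 <= s2 < n_states:
            P[s, s2] = 1.0
P /= P.sum(axis=1, keepdims=True)

# State costs: unit cost everywhere except the goal, which is free.
q = np.ones(n_states)
q[goal] = 0.0

# First-exit LMDP: z(s) = exp(-q(s)) * sum_{s'} P(s'|s) z(s') at interior
# states, with z fixed to exp(-q) at the goal. Solve by fixed-point iteration.
z = np.ones(n_states)
for _ in range(1000):
    z_new = np.exp(-q) * (P @ z)
    z_new[goal] = np.exp(-q[goal])
    if np.max(np.abs(z_new - z)) < 1e-10:
        z = z_new
        break
    z = z_new

v = -np.log(z)                     # optimal cost-to-go
# Optimal controlled transitions are a cost-tilted version of the passive
# dynamics: u*(s'|s) proportional to P(s'|s) * z(s').
U = P * z[None, :]
U /= U.sum(axis=1, keepdims=True)
print(np.round(v, 3))
```

The appeal of the LMDP for the inverse problem is this same linearity: the optimal policy depends on the state costs through a linear relation rather than a nested dynamic-programming solve, which keeps cost estimation (here, Bayesian with basis function smoothing) computationally manageable.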