A Primer on Maximum Causal Entropy Inverse Reinforcement Learning
Saved in:
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Inverse Reinforcement Learning (IRL) algorithms infer a reward function that explains demonstrations provided by an expert acting in the environment. Maximum Causal Entropy (MCE) IRL is currently the most popular formulation of IRL, with numerous extensions. In this tutorial, we present a compressed derivation of MCE IRL and the key results from contemporary implementations of MCE IRL algorithms. We hope this will serve both as an introductory resource for those new to the field, and as a concise reference for those already familiar with these topics. |
---|---|
DOI: | 10.48550/arxiv.2203.11409 |
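
The record above only summarizes the paper. As a rough illustration of the core result the summary refers to, here is a minimal tabular sketch of MCE IRL: compute the soft (causal-entropy) optimal policy for the current reward by a backward pass, compute its expected state-visitation counts by a forward pass, and ascend the gradient that matches expert feature expectations. The 5-state random MDP, one-hot state features, horizon, learning rate, and the "expert" (itself soft-optimal for a hidden reward) are all invented for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative tabular MCE IRL sketch. All environment specifics below are
# hypothetical assumptions, not details from the paper.
rng = np.random.default_rng(0)
n_states, n_actions, horizon = 5, 3, 10

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
p0 = np.ones(n_states) / n_states        # uniform initial state distribution
true_reward = rng.normal(size=n_states)  # hidden reward generating the "expert"

def soft_policy(reward):
    """Backward recursion: soft (causal-entropy) optimal policy for a state reward."""
    V = np.zeros(n_states)
    policies = []
    for _ in range(horizon):
        Q = reward[:, None] + P @ V                          # Q[s, a] = r(s) + E[V(s')]
        m = Q.max(axis=1)
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))   # stable logsumexp over actions
        policies.append(np.exp(Q - V[:, None]))              # pi(a|s) = exp(Q - V)
    policies.reverse()                                       # policies[t][s, a]
    return policies

def visitation(policies):
    """Forward pass: expected state-visitation counts under a time-varying policy."""
    d, mu = p0.copy(), np.zeros(n_states)
    for pi in policies:
        mu += d
        d = np.einsum('s,sa,san->n', d, pi, P)  # next-state distribution
    return mu

# With one-hot state features, visitation counts ARE the feature expectations.
mu_expert = visitation(soft_policy(true_reward))

theta = np.zeros(n_states)
initial_gap = np.abs(mu_expert - visitation(soft_policy(theta))).sum()
for _ in range(2000):
    mu_theta = visitation(soft_policy(theta))
    theta += 0.01 * (mu_expert - mu_theta)  # gradient: expert counts minus policy counts
final_gap = np.abs(mu_expert - visitation(soft_policy(theta))).sum()
```

The gradient step is the standard MaxEnt/MCE feature-matching update; in practice one would use better-conditioned optimizers and learned feature maps, which the tutorial's contemporary-implementation section discusses.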