A Primer on Maximum Causal Entropy Inverse Reinforcement Learning

Bibliographic Details
Published in: arXiv.org 2022-03
Main authors: Gleave, Adam; Toyer, Sam
Format: Article
Language: English
Online access: Full text
Description
Summary: Inverse Reinforcement Learning (IRL) algorithms infer a reward function that explains demonstrations provided by an expert acting in the environment. Maximum Causal Entropy (MCE) IRL is currently the most popular formulation of IRL, with numerous extensions. In this tutorial, we present a compressed derivation of MCE IRL and the key results from contemporary implementations of MCE IRL algorithms. We hope this will serve both as an introductory resource for those new to the field, and as a concise reference for those already familiar with these topics.
ISSN: 2331-8422
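
For orientation, the central results the tutorial derives can be sketched as follows. This is the standard MCE IRL formulation from the literature, stated here for reference rather than quoted from the record; the notation Q^soft, V^soft, phi, theta is conventional, not drawn from this page. For a discounted MDP with transition distribution T and parameterized reward r_theta, the maximum causal entropy policy is the soft Bellman policy:

\pi_\theta(a \mid s) = \exp\big( Q^{\mathrm{soft}}_\theta(s, a) - V^{\mathrm{soft}}_\theta(s) \big), \qquad
V^{\mathrm{soft}}_\theta(s) = \log \sum_a \exp Q^{\mathrm{soft}}_\theta(s, a),

Q^{\mathrm{soft}}_\theta(s, a) = r_\theta(s, a) + \gamma \, \mathbb{E}_{s' \sim \mathcal{T}(\cdot \mid s, a)}\big[ V^{\mathrm{soft}}_\theta(s') \big].

For a linear reward r_theta(s, a) = theta^T phi(s, a), the gradient of the expert's log-likelihood reduces to the difference between expert and learner feature expectations:

\nabla_\theta \mathcal{L}(\theta) = \mathbb{E}_{\text{expert}}\Big[ \sum_t \phi(s_t, a_t) \Big] - \mathbb{E}_{\pi_\theta}\Big[ \sum_t \phi(s_t, a_t) \Big],

so training alternates soft value iteration (to compute pi_theta) with gradient ascent on theta, until the learner's feature expectations match the expert's.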