Finite-Sample Bounds for Adaptive Inverse Reinforcement Learning using Passive Langevin Dynamics
Format: Article
Language: English
Abstract: This paper provides a finite-sample analysis of a passive stochastic gradient Langevin dynamics (PSGLD) algorithm designed to achieve adaptive inverse reinforcement learning (IRL). By passive, we mean that the noisy gradients available to the PSGLD algorithm (the inverse learning process) are evaluated at randomly chosen points by an external stochastic gradient algorithm (the forward learner) that aims to optimize a cost function. The PSGLD algorithm acts as a randomized sampler that achieves adaptive IRL by reconstructing this cost function nonparametrically from the stationary measure of a Langevin diffusion. Previous work analyzed the asymptotic performance of this passive algorithm using weak convergence techniques. This paper analyzes the non-asymptotic (finite-sample) performance using a logarithmic Sobolev inequality and the Otto-Villani theorem. We obtain finite-sample bounds on the 2-Wasserstein distance between the cost estimates generated by the PSGLD algorithm and the true cost function. Apart from achieving finite-sample guarantees for adaptive IRL, this work extends a line of research on the analysis of passive stochastic gradient algorithms to the finite-sample regime for Langevin dynamics.
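For context, the following is the standard route by which a logarithmic Sobolev inequality yields 2-Wasserstein bounds in Langevin analyses; the exact constants and error decomposition used in the paper may differ. A sketch in LaTeX, with \pi the stationary measure and \rho the LSI constant:

% If \pi satisfies a log-Sobolev inequality with constant \rho > 0, the
% Otto--Villani theorem yields the Talagrand T2 transportation inequality:
% for every probability measure \mu,
\[
  W_2(\mu, \pi) \;\le\; \sqrt{\frac{2}{\rho}\,\mathrm{KL}(\mu \,\|\, \pi)}.
\]
% Along the Langevin diffusion the KL divergence contracts exponentially,
\[
  \mathrm{KL}(\mu_t \,\|\, \pi) \;\le\; e^{-2\rho t}\,\mathrm{KL}(\mu_0 \,\|\, \pi),
\]
% so a finite-time KL estimate converts directly into a 2-Wasserstein bound.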
DOI: 10.48550/arxiv.2304.09123
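To illustrate the passive setup described in the abstract, here is a minimal one-dimensional sketch. It assumes a quadratic cost C(theta) = (theta - 1)^2, a Gaussian kernel of width delta, and classical Langevin scaling; the update rule, constants, and histogram-based reconstruction are illustrative assumptions, not the paper's exact PSGLD algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost the forward learner minimizes; the inverse learner never sees it.
def cost_grad(theta):
    return 2.0 * (theta - 1.0)  # gradient of C(theta) = (theta - 1)**2

beta, eps, delta = 4.0, 1e-3, 0.3   # inverse temperature, PSGLD step size, kernel width
theta = rng.normal()                # forward learner's iterate
alpha = rng.normal()                # inverse learner's (passive sampler's) iterate
samples = []

for k in range(200_000):
    g_hat = cost_grad(theta) + rng.normal()        # noisy gradient, evaluated at theta_k
    theta += -1e-2 * g_hat + 0.5 * rng.normal()    # forward SGD with exploration noise

    # Passive Langevin step: the gradient was evaluated at the forward learner's
    # point theta_k, so a kernel weight discounts it when theta_k is far from alpha_k.
    w = np.exp(-0.5 * ((theta - alpha) / delta) ** 2)
    alpha += -eps * w * g_hat + np.sqrt(2.0 * eps / beta) * rng.normal()
    samples.append(alpha)

# Nonparametric reconstruction: up to the bias from the forward learner's visiting
# distribution (which the paper's algorithm accounts for), -log(density)/beta
# qualitatively tracks the cost.
hist, edges = np.histogram(samples[50_000:], bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
c_hat = -np.log(np.maximum(hist, 1e-12)) / beta
c_hat -= c_hat.min()                               # pin the additive constant
print("estimated minimizer:", centers[np.argmin(c_hat)])  # should be near 1.0

The kernel weight w is what makes the algorithm passive: the inverse learner cannot choose where gradients are evaluated, so it downweights gradients observed far from its own iterate alpha_k rather than querying at alpha_k directly.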