Jointly Learning Environments and Control Policies with Projected Stochastic Gradient Ascent
Journal of Artificial Intelligence Research 73 (2022) 117-171
Format: Article
Language: English
Abstract: We consider the joint design and control of discrete-time stochastic dynamical systems over a finite time horizon. We formulate the problem as a multi-step optimization problem under uncertainty, seeking to identify a system design and a control policy that jointly maximize the expected sum of rewards collected over the time horizon considered. The transition function, the reward function and the policy are all parametrized, assumed known and differentiable with respect to their parameters. We then introduce a deep reinforcement learning algorithm combining policy gradient methods with model-based optimization techniques to solve this problem. In essence, our algorithm iteratively approximates the gradient of the expected return via Monte-Carlo sampling and automatic differentiation and takes projected gradient ascent steps in the space of environment and policy parameters. This algorithm is referred to as Direct Environment and Policy Search (DEPS). We assess the performance of our algorithm in three environments concerned with the design and control of a mass-spring-damper system, a small-scale off-grid power system and a drone, respectively. In addition, our algorithm is benchmarked against a state-of-the-art deep reinforcement learning algorithm used to tackle joint design and control problems. We show that DEPS performs at least as well as, or better than, this algorithm in all three environments, consistently yielding solutions with higher returns in fewer iterations. Finally, solutions produced by our algorithm are also compared with solutions produced by an algorithm that does not jointly optimize environment and policy parameters, highlighting the fact that higher returns can be achieved when joint optimization is performed.
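To make the problem statement in the abstract concrete, it can be written as maximizing the expected return over both design and policy parameters. The notation below (design parameters ψ, policy parameters θ, transition f, reward ρ, policy π, horizon T) is chosen here for illustration and does not appear in the record itself:

```latex
\max_{\psi \in \Psi,\; \theta \in \Theta} \; J(\psi, \theta)
  = \mathbb{E}\!\left[ \sum_{t=0}^{T-1} \rho_\psi(s_t, a_t) \right],
\qquad s_{t+1} \sim f_\psi(\cdot \mid s_t, a_t), \quad a_t \sim \pi_\theta(\cdot \mid s_t),
```

with projected gradient ascent updates of the form

```latex
(\psi, \theta) \leftarrow \Pi_{\Psi \times \Theta}\!\Big( (\psi, \theta)
  + \alpha \, \widehat{\nabla}_{\psi,\theta} J(\psi, \theta) \Big),
```

where the gradient estimate is obtained by Monte-Carlo sampling and automatic differentiation, and Π projects back onto the feasible parameter sets.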
DOI: 10.48550/arxiv.2006.01738
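The abstract describes an algorithm that differentiates Monte-Carlo rollouts end to end and takes projected gradient ascent steps jointly in environment (design) and policy parameters. The sketch below illustrates that general idea in PyTorch; the toy dynamics, reward, network sizes, box constraints and hyper-parameters are all assumptions made here for illustration and are not the authors' DEPS implementation.

```python
import torch

torch.manual_seed(0)

# Hypothetical design parameter (e.g. a spring stiffness), box-constrained.
design = torch.tensor([1.0], requires_grad=True)
DESIGN_LO, DESIGN_HI = 0.1, 5.0

# Hypothetical stochastic Gaussian policy: a small network outputs the action mean.
policy = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
log_std = torch.zeros(1, requires_grad=True)

def step(state, action, design):
    """Toy differentiable transition: a damped point mass with stiffness = design."""
    pos, vel = state[..., :1], state[..., 1:]
    acc = action - design * pos - 0.1 * vel
    dt = 0.05
    return torch.cat([pos + dt * vel, vel + dt * acc], dim=-1)

def reward(state, action):
    """Toy differentiable reward: keep the mass near the origin with cheap control."""
    return -(state ** 2).sum(-1) - 0.01 * (action ** 2).sum(-1)

params = [design, log_std] + list(policy.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

for it in range(200):
    # Monte-Carlo estimate of the expected return over a batch of rollouts.
    state = torch.randn(64, 2)
    ret = torch.zeros(64)
    for t in range(50):
        mean = policy(state)
        # Reparametrized sampling keeps the rollout differentiable end to end.
        action = mean + log_std.exp() * torch.randn_like(mean)
        ret = ret + reward(state, action)
        state = step(state, action, design)

    loss = -ret.mean()  # ascent on the return == descent on its negative
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Projection step: clamp the design back onto its feasible box.
    with torch.no_grad():
        design.clamp_(DESIGN_LO, DESIGN_HI)

print(f"final design parameter: {design.item():.3f}")
```

The key point the sketch tries to convey is that, because transition, reward and policy are all differentiable in their parameters, a single backward pass through the sampled rollouts yields gradients for the design and the policy simultaneously, and the projection keeps the design within its feasible set after each update.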