Reducing the Deployment-Time Inference Control Costs of Deep Reinforcement Learning Agents via an Asymmetric Architecture
Saved in:
Main Authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Summary: | Deep reinforcement learning (DRL) has been demonstrated to provide promising results in several challenging decision-making and control tasks. However, the inference costs of deep neural networks (DNNs) can prevent DRL from being deployed on mobile robots that cannot afford energy-intensive computation. To make DRL methods affordable on such energy-limited platforms, we propose an asymmetric architecture that reduces overall inference costs by switching between a computationally expensive policy and an economical one. Experimental results on a number of representative benchmark suites for robotic control tasks demonstrate that our method reduces inference costs while retaining the agent's overall performance. |
---|---|
DOI: | 10.48550/arxiv.2105.14471 |
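
The abstract describes the asymmetric architecture only at a high level: a large, accurate policy and a small, cheap one share control of the agent, and switching between them lowers the average per-step inference cost. The sketch below illustrates that switching idea under stated assumptions; the network sizes, the fixed every-10th-step gating rule, and all names (`make_policy`, `expensive`, `economic`) are illustrative placeholders, not the paper's actual mechanism, which the abstract does not specify.

```python
# A minimal sketch of the switching idea, assuming random-weight MLPs as
# stand-ins for trained policies. make_policy, the hidden sizes, and the
# every-10th-step gate are illustrative assumptions, not the paper's method.
import numpy as np


def make_policy(obs_dim, act_dim, hidden_sizes, rng):
    """Build a tanh MLP with random weights and report a per-step cost proxy."""
    sizes = [obs_dim, *hidden_sizes, act_dim]
    weights = [rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
               for n_in, n_out in zip(sizes[:-1], sizes[1:])]

    def policy(obs):
        h = obs
        for w in weights[:-1]:
            h = np.tanh(h @ w)
        return np.tanh(h @ weights[-1])  # actions in [-1, 1]

    # Multiply-accumulates per forward pass, a rough proxy for inference cost.
    macs = sum(n_in * n_out for n_in, n_out in zip(sizes[:-1], sizes[1:]))
    return policy, macs


rng = np.random.default_rng(0)
obs_dim, act_dim = 17, 6  # dimensions of a typical MuJoCo-style control task

expensive, big_cost = make_policy(obs_dim, act_dim, [256, 256], rng)
economic, small_cost = make_policy(obs_dim, act_dim, [16, 16], rng)

total_cost, steps = 0, 1000
obs = rng.standard_normal(obs_dim)
for t in range(steps):
    # Hypothetical gate: the cheap policy acts by default; the expensive one
    # is consulted on a fixed schedule. The paper would decide when to switch;
    # the abstract does not state the rule, so this is only a placeholder.
    if t % 10 == 0:
        action, cost = expensive(obs), big_cost
    else:
        action, cost = economic(obs), small_cost
    total_cost += cost
    obs = rng.standard_normal(obs_dim)  # placeholder for env.step(action)

print(f"average cost/step: {total_cost / steps:.0f} MACs "
      f"vs. {big_cost} MACs for the expensive policy alone")
```

With the cheap policy taking nine of every ten steps, the average per-step cost falls close to that of the small network; the trade-off the paper studies is how often the expensive policy must act for the agent to retain its overall performance.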