Constrained Meta Agnostic Reinforcement Learning
Saved in:
Main author:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Meta-Reinforcement Learning (Meta-RL) aims to acquire meta-knowledge for quick adaptation to diverse tasks. However, deploying such policies in real-world environments poses a significant challenge: balancing rapid adaptability with adherence to environmental constraints. Our novel approach, Constraint Model Agnostic Meta Learning (C-MAML), merges meta-learning with constrained optimization to address this challenge. C-MAML enables rapid and efficient task adaptation by incorporating task-specific constraints directly into its meta-algorithm framework during the training phase, yielding safer initial parameters for learning new tasks. We demonstrate the effectiveness of C-MAML on simulated wheeled-robot locomotion tasks of varying complexity, highlighting its practicality and robustness in dynamic environments.
DOI: 10.48550/arxiv.2406.14047
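The general idea described in the summary, folding a constraint penalty into both the inner task adaptation and the outer meta-update of a MAML-style algorithm, can be illustrated with a minimal first-order sketch on toy quadratic tasks. This is not the authors' C-MAML implementation; the loss, the norm-ball "safety" constraint, the penalty weight `lam`, and all other names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def task_loss(theta, target):
    # toy per-task objective: reach a task-specific target parameter
    return np.sum((theta - target) ** 2)

def constraint(theta):
    # toy safety constraint: keep the parameter norm <= 1
    # (returns the violation amount, 0 when feasible)
    return max(0.0, float(np.linalg.norm(theta)) - 1.0)

def grad(f, theta, eps=1e-5):
    # central-difference numerical gradient, to keep the sketch self-contained
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

def maml_step(theta, tasks, alpha=0.1, beta=0.05, lam=10.0):
    """One first-order meta-update: adapt on each task's penalty-augmented
    loss, then move theta toward lower post-adaptation constrained loss."""
    meta_g = np.zeros_like(theta)
    for target in tasks:
        L = lambda th: task_loss(th, target) + lam * constraint(th)
        adapted = theta - alpha * grad(L, theta)   # inner (task) adaptation
        meta_g += grad(L, adapted)                 # first-order outer gradient
    return theta - beta * meta_g / len(tasks)

theta = np.zeros(2)
tasks = [np.array([0.5, 0.0]), np.array([0.0, 0.5])]
for _ in range(100):
    theta = maml_step(theta, tasks)
# the meta-initialization settles between the task targets while
# remaining feasible (constraint(theta) == 0)
```

The penalty term makes constraint violations costly during both adaptation and meta-training, so the learned initialization tends to lie in the feasible region, mirroring the abstract's claim of "safer initial parameters for learning new tasks."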