Neuroplastic Expansion in Deep Reinforcement Learning
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The loss of plasticity in learning agents, analogous to the solidification of
neural pathways in biological brains, significantly impedes learning and
adaptation in reinforcement learning due to its non-stationary nature. To
address this fundamental challenge, we propose a novel approach, Neuroplastic
Expansion (NE), inspired by cortical expansion in cognitive science. NE
maintains learnability and adaptability throughout the entire training process
by dynamically growing the network from a smaller initial size to its full
dimension. Our method is designed with three key components: (1) elastic neuron
generation based on potential gradients, (2) dormant neuron pruning to optimize
network expressivity, and (3) neuron consolidation via experience review to
strike a balance in the plasticity-stability dilemma. Extensive experiments
demonstrate that NE effectively mitigates plasticity loss and outperforms
state-of-the-art methods across various tasks in MuJoCo and DeepMind Control
Suite environments. NE enables more adaptive learning in complex, dynamic
environments, which represents a crucial step towards transitioning deep
reinforcement learning from static, one-time training paradigms to more
flexible, continually adapting models.
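The dormant neuron pruning component can be illustrated with a minimal sketch. It assumes the common criterion from the dormant-neuron literature: a neuron counts as dormant when its mean absolute activation, normalized by the layer-wide average, falls below a threshold tau. The function name, the threshold value, and this exact scoring rule are illustrative assumptions; the paper's precise mechanism may differ.

```python
import numpy as np

def dormant_mask(activations, tau=0.1):
    """Flag dormant neurons in one layer.

    activations: array of shape (batch, n_neurons) holding
    post-activation values for a batch of inputs.

    A neuron i is marked dormant when its mean |activation|,
    divided by the mean of that statistic over all neurons in
    the layer, falls below tau. (Illustrative criterion; NE's
    exact pruning rule may differ.)
    """
    per_neuron = np.abs(activations).mean(axis=0)         # s_i per neuron
    normalized = per_neuron / (per_neuron.mean() + 1e-8)  # s_i / mean_j s_j
    return normalized < tau                               # boolean prune mask
```

In a growing-network setting, the columns flagged by this mask would be pruned (or recycled into newly generated neurons), keeping the effective capacity aligned with what the network actually uses.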
DOI: 10.48550/arxiv.2410.07994