Inhibitory unlearning: a mechanism for increasing the storage capacity in an attractor network
Main authors:
Format: Conference paper
Language: English
Subjects:
Online access: Order full text
Abstract: Attractor networks with and without learning dynamics have been proposed as models for the formation of neural assemblies. For this work, we have used an attractor-recurrent network that builds internal representations of input stimuli as assemblies of neurons. The network learns in an ongoing, human-like fashion, integrating new information into what it already knows. This sequential learning process faces two fundamental problems: the limited storage capacity of the network and catastrophic forgetting. As learning proceeds, network performance degrades: the network takes more time to learn new stimuli, new assemblies are smaller, and retrieval capacity decreases. To address this, we propose a mechanism based on the unlearning of inhibitory connections.
DOI: 10.1109/KES.2000.885786
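
The abstract only names the proposed mechanism, so the following Python sketch is purely illustrative. It assumes a standard Hopfield-style attractor network with Hebbian storage and models "inhibitory unlearning" as decaying negative weights toward zero after each new pattern is stored; the network model, the unlearning rule, and the parameters (`N`, `eta`) are assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch only: a Hopfield-style stand-in for the paper's
# attractor-recurrent network. "Inhibitory unlearning" is modelled here
# as shrinking negative (inhibitory) weights by a factor eta after each
# newly stored pattern; the rule used in the paper may differ.

rng = np.random.default_rng(0)
N = 100          # number of binary (+/-1) neurons (assumed)
eta = 0.1        # assumed inhibitory-unlearning rate

def store(W, pattern):
    """Hebbian storage of one +/-1 pattern (sequential learning)."""
    W = W + np.outer(pattern, pattern) / len(pattern)
    np.fill_diagonal(W, 0.0)
    return W

def unlearn_inhibition(W, eta):
    """Shrink inhibitory (negative) weights toward zero."""
    return np.where(W < 0, (1.0 - eta) * W, W)

def recall(W, cue, steps=20):
    """Synchronous attractor dynamics starting from a noisy cue."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

patterns = [rng.choice([-1, 1], size=N) for _ in range(20)]
W = np.zeros((N, N))
for p in patterns:
    W = store(W, p)                 # sequential learning of stimuli
    W = unlearn_inhibition(W, eta)  # assumed inhibitory-unlearning step

# Check retrieval of the first stored pattern from a corrupted cue.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
overlap = np.mean(recall(W, cue) == patterns[0])
print(f"overlap with stored pattern: {overlap:.2f}")
```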