ATLAS: Universal Function Approximator for Memory Retention
Saved in:
| Main author: | , |
|---|---|
| Format: | Article |
| Language: | eng |
| Keywords: | |
| Online access: | Order full text |
| Abstract: | Artificial neural networks (ANNs), despite their universal function approximation capability and practical success, are subject to catastrophic forgetting. Catastrophic forgetting refers to the abrupt unlearning of a previous task when a new task is learned. It is an emergent phenomenon that hinders continual learning. Existing universal function approximation theorems for ANNs guarantee function approximation ability but do not predict catastrophic forgetting. This paper presents a novel universal approximation theorem for multi-variable functions using only single-variable functions and exponential functions. Furthermore, we present ATLAS: a novel ANN architecture based on the new theorem. It is shown that ATLAS is a universal function approximator capable of some memory retention and continual learning. The memory of ATLAS is imperfect, with some off-target effects during continual learning, but it is well-behaved and predictable. An efficient implementation of ATLAS is provided. Experiments are conducted to evaluate both the function approximation and memory retention capabilities of ATLAS. |
| DOI: | 10.48550/arxiv.2208.05388 |
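
The abstract above names a universal approximation theorem built from single-variable functions and exponentials, but does not state its exact form. As a hedged illustration only, one natural shape for such a decomposition is a sum of exponentials of sums of univariate functions; the symbols $K$ and $g_{k,i}$ below are illustrative assumptions, not notation taken from the paper:

```latex
% One plausible form of such a decomposition (illustrative, not the paper's statement):
% F is assembled from single-variable functions g_{k,i} and the exponential alone.
F(x_1, \dots, x_n) \;\approx\; \sum_{k=1}^{K} \exp\!\left( \sum_{i=1}^{n} g_{k,i}(x_i) \right)
```

The intuition behind this shape is standard: a product of positive single-variable functions is an exponential of a sum, $\prod_i f_i(x_i) = \exp\big(\sum_i \log f_i(x_i)\big)$, so sums of such exponential terms can stand in for the sums of products that classical approximation arguments rely on.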
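
The abstract also mentions that an efficient implementation is provided, but none is included in this record. Below is a minimal, self-contained NumPy sketch of the assumed decomposition above, with each single-variable function modeled by a cubic polynomial; all names (`make_model`, `forward`, `n_terms`) are hypothetical, and this is not the paper's released code.

```python
# Hypothetical sketch, not the paper's released ATLAS code: evaluates the assumed
# decomposition F(x) = sum_k exp( sum_i g_{k,i}(x_i) ), where each single-variable
# g_{k,i} is a cubic polynomial with learnable coefficients.
import numpy as np

rng = np.random.default_rng(0)

def make_model(n_inputs, n_terms, degree=3):
    # One coefficient vector per (term, input) pair: g_{k,i}(t) = c_0 + c_1 t + ... + c_d t^d.
    return rng.normal(scale=0.1, size=(n_terms, n_inputs, degree + 1))

def forward(coeffs, x):
    # x has shape (batch, n_inputs); build polynomial features [1, t, ..., t^degree].
    degree = coeffs.shape[-1] - 1
    powers = np.stack([x ** p for p in range(degree + 1)], axis=-1)  # (batch, n_inputs, degree+1)
    g = np.einsum('kid,bid->bki', coeffs, powers)  # g_{k,i}(x_i): (batch, n_terms, n_inputs)
    return np.exp(g.sum(axis=-1)).sum(axis=-1)     # sum over inputs, exp, sum over terms

coeffs = make_model(n_inputs=2, n_terms=8)
x = rng.uniform(-1.0, 1.0, size=(4, 2))
print(forward(coeffs, x))  # four scalar outputs of the approximator
```

In a sketch like this, the coefficients could be fit by gradient descent on a regression loss; keeping each $g_{k,i}$ strictly single-variable is what lets the exponential terms emulate products of univariate factors.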