A constrained-optimization approach to training neural networks for smooth function approximation and system identification
Main Authors: | , |
---|---|
Format: | Conference Proceedings |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | A constrained-backpropagation training technique is presented to suppress interference and preserve prior knowledge in sigmoidal neural networks while new information is learned incrementally. The technique is based on constrained optimization and minimizes an error function subject to a set of equality constraints derived via an algebraic training approach. As a result, sigmoidal neural networks with long-term procedural memory (also known as implicit knowledge) can be obtained and trained repeatedly online without experiencing interference. The generality and effectiveness of this approach are demonstrated through three applications: function approximation, solution of differential equations, and system identification. The results show that the long-term memory is maintained virtually intact and may lead to computational savings, because the implicit knowledge provides a lasting performance baseline for the neural network. |
ISSN: | 2161-4393, 1522-4899, 2161-4407 |
DOI: | 10.1109/IJCNN.2008.4634124 |
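
The summary above describes the core mechanism: minimize a training error subject to equality constraints that pin the network's output at a set of memory points, so that prior knowledge survives incremental learning. Below is a minimal NumPy sketch of one way such equality-constrained training can be set up, assuming a single sigmoidal hidden layer whose input weights are held fixed at random values (the paper also adapts the hidden-layer weights through constrained backpropagation); the tasks, data, and network sizes here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Long-term memory: points the network must keep fitting exactly (hypothetical old task).
x_mem = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
y_mem = np.sin(np.pi * x_mem)

# New data learned incrementally (hypothetical new task).
x_new = rng.uniform(-1.0, 1.0, (50, 1))
y_new = np.sin(np.pi * x_new) + 0.3 * x_new**2

n_hidden = 20
W = rng.normal(0.0, 2.0, (1, n_hidden))  # input weights, held fixed in this sketch
b = rng.normal(0.0, 1.0, n_hidden)

def hidden(x):
    return sigmoid(x @ W + b)  # sigmoidal hidden-layer activations

S_mem = hidden(x_mem)  # constraint matrix: S_mem @ v = y_mem must hold exactly
S_new = hidden(x_new)

# The equality constraints define an affine set of output weights, v = v0 + N @ z,
# where v0 is any particular solution and N spans the null space of S_mem.
v0, *_ = np.linalg.lstsq(S_mem, y_mem, rcond=None)  # exact if S_mem has full row rank
_, s, Vt = np.linalg.svd(S_mem)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T  # orthonormal null-space basis, shape (n_hidden, n_hidden - rank)

# Minimize the new-data error over z: ordinary least squares in the null space,
# so the memory constraints remain satisfied by construction.
z, *_ = np.linalg.lstsq(S_new @ N, y_new - S_new @ v0, rcond=None)
v = v0 + N @ z

print("max memory residual:", np.abs(S_mem @ v - y_mem).max())  # ~0: prior knowledge intact
print("new-data RMSE:", float(np.sqrt(np.mean((S_new @ v - y_new) ** 2))))
```

Parametrizing the output weights as v = v0 + N z makes every constraint hold by construction, so fitting the new data cannot interfere with the stored memory; this is one standard way to handle linear equality constraints, not necessarily the paper's exact algorithm.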