Minimal training set size estimation for neural network-based function approximation

Bibliographic details
Authors: Malinowski, A., Zurada, J.R., Aronhime, P.B.
Format: Conference paper
Language: English
Description
Summary: A new approach to the problem of n-dimensional continuous and sampled-data function approximation using a two-layer neural network is presented. The generalized Nyquist theorem is introduced to solve for the optimum number of training examples in n-dimensional input space. Choosing the smallest but still sufficient set of training vectors results in a reduced learning time for the network. Analytical formulas and an algorithm for training set size reduction are developed and illustrated with two-dimensional data examples.
DOI: 10.1109/ISCAS.1994.409611
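
The paper's analytical formulas are not reproduced in this record, so the Python sketch below is only a rough illustration of the idea stated in the summary: it assumes the target function is band-limited separately along each input dimension and places training points on a uniform rectangular grid sampled at each dimension's Nyquist rate. The function names (min_samples_per_dim, minimal_training_grid), the band-limit values, and the grid construction are illustrative assumptions, not the authors' algorithm.

import math
from itertools import product


def min_samples_per_dim(f_max, length):
    # Nyquist criterion: the sampling rate along a dimension must be at
    # least 2*f_max, so an interval of the given length needs at least
    # ceil(2*f_max*length) + 1 uniformly spaced points (the +1 closes the
    # interval at both end points).
    return math.ceil(2.0 * f_max * length) + 1


def minimal_training_grid(bounds, f_max):
    # Build a uniform rectangular training grid whose density along each
    # input dimension meets that dimension's assumed Nyquist rate.
    #   bounds : list of (lo, hi) pairs, one per input dimension
    #   f_max  : assumed maximum spatial frequency (cycles per unit length)
    #            of the target function along each dimension; these values
    #            are hypothetical inputs, not quantities given by the paper
    axes = []
    for (lo, hi), fm in zip(bounds, f_max):
        n_i = min_samples_per_dim(fm, hi - lo)
        step = (hi - lo) / (n_i - 1)
        axes.append([lo + k * step for k in range(n_i)])
    points = list(product(*axes))  # Cartesian product of the per-dimension axes
    return points, len(points)


if __name__ == "__main__":
    # Two-dimensional example on the unit square with assumed band limits
    # of 3 and 2 cycles per unit along x1 and x2 respectively.
    pts, count = minimal_training_grid(bounds=[(0.0, 1.0), (0.0, 1.0)],
                                       f_max=[3.0, 2.0])
    print("minimal training set size:", count)  # (2*3+1) * (2*2+1) = 35

For this two-dimensional example the grid contains 7 x 5 = 35 points; in practice the per-dimension band limits would have to be estimated from the data or from prior knowledge of the target function, which is where the paper's analytical treatment comes in.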