A tree-structured adaptive network for function approximation in high-dimensional spaces
Published in: IEEE Transactions on Neural Networks, 1991-03, Vol. 2 (2), pp. 285-293
Author: Sanger, Terence D.
Format: Article
Language: English
Online access: Order full text
Abstract: Nonlinear function approximation is often solved by finding a set of coefficients for a finite number of fixed nonlinear basis functions. However, if the input data are drawn from a high-dimensional space, the number of required basis functions grows exponentially with dimension, leading many to suggest the use of adaptive nonlinear basis functions whose parameters can be determined by iterative methods. The author proposes a technique based on the idea that for most of the data, only a few dimensions of the input may be necessary to compute the desired output function. Additional input dimensions are incorporated only where needed. The learning procedure grows a tree whose structure depends upon the input data and the function to be approximated. This technique has a fast learning algorithm with no local minima once the network shape is fixed, and it can be used to reduce the number of required measurements in situations where there is a cost associated with sensing. Three examples are given: controlling the dynamics of a simulated planar two-joint robot arm, predicting the dynamics of the chaotic Mackey-Glass equation, and predicting pixel values in real images from pixel values above and to the left.
ISSN: 1045-9227, 1941-0093
DOI: 10.1109/72.80339
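
The abstract describes the method only at a high level. As a rough illustration of the general flavour of such a scheme, the following Python/NumPy sketch grows a tree of nodes, each holding fixed basis functions of a single input dimension, and attaches a child (which incorporates one additional dimension) only where the weighted residual error stays large; because each node's coefficients are found by linear least squares, the coefficient problem has no local minima once the tree shape is fixed. The polynomial basis, the growth criterion, the error threshold, and all names (BasisTreeNode, dims_in_tree, err_threshold) are illustrative assumptions, not details taken from the paper.

```python
# Toy sketch (not the paper's algorithm): a tree of nodes, each holding fixed
# polynomial basis functions of a single input dimension.  A child node, which
# brings in one additional input dimension, is attached only where the
# weighted residual error of its parent's basis function remains large.  All
# coefficients are obtained by linear least squares, so once the tree shape is
# fixed the fitting problem is convex (no local minima).
import numpy as np


class BasisTreeNode:
    def __init__(self, dim, degree=3):
        self.dim = dim            # the single input dimension this node uses
        self.degree = degree      # polynomial basis: 1, x, x^2, ..., x^degree
        self.coef = None
        self.children = {}        # basis-function index -> child node

    def _basis(self, X):
        x = X[:, self.dim]
        return np.column_stack([x ** p for p in range(self.degree + 1)])

    def fit(self, X, target, weight, avail_dims, err_threshold=1e-2):
        Phi = self._basis(X)
        B = Phi * weight[:, None]                 # gated by the parent's basis
        self.coef, *_ = np.linalg.lstsq(B, target, rcond=None)
        residual = target - B @ self.coef
        if not avail_dims:
            return
        # Grow a child (using the next unused dimension) only for basis
        # functions whose weighted residual error is still large.
        for i in range(Phi.shape[1]):
            w_i = weight * Phi[:, i]
            if np.mean((residual * w_i) ** 2) > err_threshold:
                child = BasisTreeNode(avail_dims[0], self.degree)
                self.children[i] = child
                child.fit(X, residual, w_i, avail_dims[1:], err_threshold)
                residual = residual - child.predict(X, w_i)

    def predict(self, X, weight):
        Phi = self._basis(X)
        out = (Phi * weight[:, None]) @ self.coef
        for i, child in self.children.items():
            out = out + child.predict(X, weight * Phi[:, i])
        return out

    def dims_in_tree(self):
        used = {self.dim}
        for child in self.children.values():
            used |= child.dims_in_tree()
        return used


if __name__ == "__main__":
    # 5-D input, but the target depends on only the first two dimensions.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(2000, 5))
    y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

    root = BasisTreeNode(dim=0)
    ones = np.ones(len(X))
    root.fit(X, y, weight=ones, avail_dims=[1, 2, 3, 4])
    mse = np.mean((root.predict(X, ones) - y) ** 2)
    print("dimensions used:", sorted(root.dims_in_tree()), "MSE:", mse)
```

On this toy problem the grown tree should end up using only dimensions 0 and 1 of the five-dimensional input, which mirrors the behaviour the abstract describes: additional input dimensions are incorporated only where the residual error demands them.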