Nonlinear Principal Component Analysis by Neural Networks: Theory and Application to the Lorenz System


Bibliographic Details
Published in: Journal of Climate, 2000-02, Vol. 13 (4), p. 821-835
Author: Monahan, Adam H.
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: A nonlinear generalization of principal component analysis (PCA), denoted nonlinear principal component analysis (NLPCA), is implemented in a variational framework using a five-layer autoassociative feed-forward neural network. The method is tested on a dataset sampled from the Lorenz attractor, and it is shown that the NLPCA approximations to the attractor in one and two dimensions, explaining 76% and 99.5% of the variance, respectively, are superior to the corresponding PCA approximations, which respectively explain 60% (mode 1) and 95% (modes 1 and 2) of the variance. It is found that as noise is added to the Lorenz attractor, the NLPCA approximations remain superior to the PCA approximations until the noise level is so great that the lower-dimensional nonlinear structure of the data is no longer manifest to the eye. Finally, directions for future work are presented, and a cinematographic technique to visualize the results of NLPCA is discussed.
ISSN: 0894-8755
1520-0442
DOI: 10.1175/1520-0442(2000)013<0821:NPCABN>2.0.CO;2
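
To make the architecture described in the abstract concrete, the following is a minimal sketch of NLPCA using a five-layer autoassociative (bottleneck autoencoder) network fit to data sampled from the Lorenz attractor. It is not the author's original implementation: the choice of TensorFlow/Keras, the layer widths, the tanh activations, and the training settings are illustrative assumptions.

# Minimal, illustrative NLPCA sketch: five-layer autoassociative network
# trained on Lorenz-attractor samples.  Library choice, layer widths,
# activations, and training settings are assumptions, not the paper's code.
import numpy as np
from scipy.integrate import solve_ivp
import tensorflow as tf

# Sample a trajectory of the Lorenz system with the standard parameter values.
def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 100.0), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0.0, 100.0, 5000))
data = sol.y.T                                        # shape (5000, 3)
data = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize coordinates

# Five layers: input -> encoding -> bottleneck -> decoding -> output.
# A one-unit bottleneck gives a one-dimensional NLPCA approximation;
# setting bottleneck = 2 gives the two-dimensional approximation.
bottleneck = 1
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),                              # input layer
    tf.keras.layers.Dense(8, activation="tanh"),             # encoding layer
    tf.keras.layers.Dense(bottleneck, activation="linear"),  # nonlinear PC(s)
    tf.keras.layers.Dense(8, activation="tanh"),             # decoding layer
    tf.keras.layers.Dense(3, activation="linear"),           # reconstruction
])
model.compile(optimizer="adam", loss="mse")   # minimize reconstruction error
model.fit(data, data, epochs=200, batch_size=64, verbose=0)

# Fraction of variance captured by the low-dimensional approximation.
recon = model.predict(data, verbose=0)
explained = 1.0 - np.sum((data - recon) ** 2) / np.sum((data - data.mean(axis=0)) ** 2)
print(f"Variance explained by the {bottleneck}-D NLPCA approximation: {explained:.3f}")

Because the bottleneck value is a nonlinear function of the inputs and the reconstruction is a nonlinear function of the bottleneck, such a network can follow the curved structure of the attractor, which is why the abstract reports the one- and two-dimensional NLPCA approximations explaining more variance than the corresponding PCA truncations.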