Asymptotic error bounds for kernel-based Nyström low-rank approximation matrices
Published in: Journal of Multivariate Analysis, 2013-09, Vol. 120, pp. 102-119
Authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Many kernel-based learning algorithms have a computational load that scales with the sample size $n$ through the column size of the full kernel Gram matrix $K$. This article considers the Nyström low-rank approximation. It uses a reduced kernel $\hat K$, an $n \times m$ matrix consisting of $m$ columns (say columns $i_1, i_2, \dots, i_m$) drawn at random from $K$. The approximation takes the form $K \approx \hat K U^{-1} \hat K^\top$, where $U$ is the reduced $m \times m$ matrix formed by rows $i_1, i_2, \dots, i_m$ of $\hat K$. Often $m$ is much smaller than the sample size $n$, giving a thin rectangular reduced kernel and learning algorithms that scale with the column size $m$. The quality of the matrix approximation can be assessed by how closely its eigenvalues and eigenvectors match those of $K$. In this article, asymptotic error bounds on eigenvalues and eigenvectors are derived for the Nyström low-rank approximation matrix.
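As a concrete illustration of the construction described in the abstract, here is a minimal sketch in Python/NumPy. It is not from the paper; the RBF kernel, the sample sizes, and the uniform sampling scheme are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
n, m = 500, 50                     # sample size n, number of sampled columns m
X = rng.standard_normal((n, 3))    # synthetic data; any kernel-generating sample works

K = rbf_kernel(X, X)               # full n x n Gram matrix (formed here only for comparison)

# Draw m column indices i_1, ..., i_m uniformly without replacement.
idx = rng.choice(n, size=m, replace=False)
K_hat = K[:, idx]                  # n x m reduced kernel: columns i_1..i_m of K
U = K_hat[idx, :]                  # m x m matrix: rows i_1..i_m of K_hat

# Nystrom approximation K ~ K_hat U^{-1} K_hat^T; the pseudo-inverse is a
# numerical safeguard in case U is (nearly) singular.
K_nys = K_hat @ np.linalg.pinv(U) @ K_hat.T

print("relative Frobenius error:",
      np.linalg.norm(K - K_nys, "fro") / np.linalg.norm(K, "fro"))
```

Using the pseudo-inverse in place of $U^{-1}$ is a common practical safeguard when the sampled columns are nearly dependent; the abstract itself states the approximation with $U^{-1}$.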
Highlights:
- Many kernel-based learning algorithms have a computational load that scales with the sample size.
- The Nyström low-rank approximation is designed to reduce this computation.
- We propose the spectrum decomposition condition with a theoretical justification.
- Asymptotic error bounds on eigenvalues and eigenvectors are derived (a quick empirical check follows this list).
- Numerical experiments are provided for the covariance kernel and the Wishart matrix.
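Continuing the sketch above, one can compare the leading eigenpairs of $K$ and its Nyström approximation empirically. The absolute cosine between eigenvectors as a closeness measure is our choice here, not necessarily the paper's metric.

```python
# Continues the Nystrom sketch above: compare leading eigenpairs of K and K_nys.
eval_K, evec_K = np.linalg.eigh(K)      # eigenvalues in ascending order
eval_N, evec_N = np.linalg.eigh(K_nys)

for j in range(1, 6):                   # top five eigenpairs
    # Eigenvectors are defined only up to sign, so compare |cos(angle)|.
    cos = abs(evec_K[:, -j] @ evec_N[:, -j])
    print(f"lambda_{j}: {eval_K[-j]:.4f} vs {eval_N[-j]:.4f}  |cos| = {cos:.4f}")
```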
ISSN: 0047-259X (print); 1095-7243 (online)
DOI: 10.1016/j.jmva.2013.05.006