Asymptotic error bounds for kernel-based Nyström low-rank approximation matrices

Many kernel-based learning algorithms have computational load that scales with the sample size n due to the column size of the full kernel Gram matrix K. This article considers the Nyström low-rank approximation. It uses a reduced kernel K̂, which is n×m, consisting of m columns (say columns i₁, i₂, …, iₘ) randomly drawn from K. The approximation takes the form K ≈ K̂U⁻¹K̂ᵀ, where U is the reduced m×m matrix formed by rows i₁, i₂, …, iₘ of K̂. Often m is much smaller than the sample size n, resulting in a thin rectangular reduced kernel, which leads to learning algorithms that scale with the column size m. The quality of the matrix approximation can be assessed by the closeness of its eigenvalues and eigenvectors to those of K. In this article, asymptotic error bounds on eigenvalues and eigenvectors are derived for the Nyström low-rank approximation matrix.
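For illustration, a minimal numerical sketch (Python/NumPy) of the Nyström approximation described above. This is not the authors' code; the Gaussian kernel choice, function names, and parameter values are assumptions made for the example.

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between rows of X and rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def nystrom_approximation(X, m, sigma=1.0, seed=0):
    # Forms K ≈ K̂ U⁻¹ K̂ᵀ with K̂ the n×m reduced kernel and U its m×m sub-block.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # columns i1, ..., im drawn from K
    K_hat = gaussian_kernel(X, X[idx], sigma)    # n×m reduced kernel
    U = K_hat[idx, :]                            # rows i1, ..., im of K̂
    # pinv stands in for the inverse in case U is numerically singular.
    return K_hat @ np.linalg.pinv(U) @ K_hat.T

With, say, n = 1000 samples and m = 50 sampled columns, the n×n Gram matrix is replaced by an n×m factor, which is what allows downstream algorithms to scale with m rather than n.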


Bibliographic Details
Published in: Journal of multivariate analysis, 2013-09, Vol. 120, pp. 102-119
Main Authors: Chang, Lo-Bin; Bai, Zhidong; Huang, Su-Yun; Hwang, Chii-Ruey
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_end_page 119
container_issue
container_start_page 102
container_title Journal of multivariate analysis
container_volume 120
creator Chang, Lo-Bin
Bai, Zhidong
Huang, Su-Yun
Hwang, Chii-Ruey
description Many kernel-based learning algorithms have computational load that scales with the sample size n due to the column size of the full kernel Gram matrix K. This article considers the Nyström low-rank approximation. It uses a reduced kernel K̂, which is n×m, consisting of m columns (say columns i₁, i₂, …, iₘ) randomly drawn from K. The approximation takes the form K ≈ K̂U⁻¹K̂ᵀ, where U is the reduced m×m matrix formed by rows i₁, i₂, …, iₘ of K̂. Often m is much smaller than the sample size n, resulting in a thin rectangular reduced kernel, which leads to learning algorithms that scale with the column size m. The quality of the matrix approximation can be assessed by the closeness of its eigenvalues and eigenvectors to those of K. In this article, asymptotic error bounds on eigenvalues and eigenvectors are derived for the Nyström low-rank approximation matrix.
Highlights:
• Many kernel-based learning algorithms have computational load that scales with the sample size.
• The Nyström low-rank approximation is designed to reduce this computation.
• We propose the spectrum decomposition condition with a theoretical justification.
• Asymptotic error bounds on eigenvalues and eigenvectors are derived.
• Numerical experiments are provided for a covariance kernel and a Wishart matrix.
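Since the bounds are stated in terms of eigenvalue and eigenvector closeness, that closeness can be checked numerically along the following lines. This is an illustrative sketch, not the article's experiments; the function name and the positional pairing of eigenvectors are simplifications for the example.

import numpy as np

def spectral_closeness(K, K_approx, k=5):
    # Compare the k leading eigenvalues and eigenvectors of two symmetric matrices.
    evals_K, evecs_K = np.linalg.eigh(K)         # eigenvalues in ascending order
    evals_A, evecs_A = np.linalg.eigh(K_approx)
    eigval_err = np.abs(evals_K[-k:] - evals_A[-k:])
    # |cos| of the angle between paired eigenvectors; abs() removes the sign ambiguity.
    eigvec_cos = np.abs((evecs_K[:, -k:] * evecs_A[:, -k:]).sum(axis=0))
    return eigval_err, eigvec_cos

Pairing eigenvectors by position is only safe when the leading eigenvalues are well separated; with close or repeated eigenvalues one would compare invariant subspaces (e.g., via principal angles) instead.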
doi_str_mv 10.1016/j.jmva.2013.05.006
format Article
fulltext fulltext
identifier ISSN: 0047-259X
ispartof Journal of multivariate analysis, 2013-09, Vol.120, p.102-119
issn 0047-259X
1095-7243
language eng
source Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; ScienceDirect Journals (5 years ago - present)
subjects Algorithms
Approximation
Artificial intelligence
Asymptotic error bound
Asymptotic methods
Eigenvalues
Kernel Gram matrix
Matrix
Nyström approximation
Sample size
Spectrum decomposition
Studies
Wishart random matrix
title Asymptotic error bounds for kernel-based Nyström low-rank approximation matrices