Precision aggregated local models

Large-scale Gaussian process (GP) regression is infeasible for large training data due to cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, for example, partitioning into subproblems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than ordinary GPs. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus both good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here we propose putting LAGP-like methods into a local experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what we call precision aggregated local models (PALM). Using K LAGPs, each selecting n from N total data pairs, our scheme is at most cubic in n, quadratic in K, and linear in N. Extensive empirical illustration shows how PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictions. Finally, we propose a sequential updating scheme that greedily refines a PALM predictor up to a computational budget.
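The abstract pins down the cost structure (cubic in n, quadratic in K, linear in N) but not the combination rule itself. A natural reading of "precision aggregated" is standard inverse-variance weighting of local experts, mu(x) = [sum_k mu_k(x)/var_k(x)] / [sum_k 1/var_k(x)]: because each expert's predictive mean and variance vary smoothly in x, so does the blend, which is how continuity survives aggregation. The sketch below illustrates that generic scheme on toy 1-D data. The nearest-neighbor sub-designs, squared-exponential kernel, and every name and parameter value in it are illustrative assumptions, not the authors' implementation (the paper builds on LAGP, available in R as the laGP package).

```python
# Minimal sketch, under the assumptions above: K local GPs, each
# conditioned on a small neighborhood of its own center, blended by
# precision (inverse predictive variance). Not the authors' code.
import numpy as np

def sq_exp(a, b, ls=0.1):
    """Squared-exponential covariance between 1-D input vectors a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ls**2))

def local_gp_predict(X, y, center, x_star, n=30, nugget=1e-6):
    """GP prediction at x_star using only the n training points nearest
    to this expert's center -- a crude stand-in for LAGP's sequentially
    designed local sub-designs."""
    idx = np.argsort(np.abs(X - center))[:n]
    Xn, yn = X[idx], y[idx]
    Kn = sq_exp(Xn, Xn) + nugget * np.eye(len(Xn))   # local covariance, n x n
    kx = sq_exp(Xn, np.atleast_1d(x_star))[:, 0]
    mu = kx @ np.linalg.solve(Kn, yn)                # predictive mean
    var = 1.0 + nugget - kx @ np.linalg.solve(Kn, kx)  # predictive variance
    return mu, max(var, 1e-12)

def palm_predict(X, y, centers, x_star, n=30):
    """Precision-weighted blend: experts that are confident near x_star
    (small predictive variance) dominate. Weights vary smoothly in
    x_star, so the aggregate prediction is continuous."""
    preds = [local_gp_predict(X, y, c, x_star, n) for c in centers]
    w = np.array([1.0 / v for _, v in preds])        # precisions as weights
    mus = np.array([m for m, _ in preds])
    return (w @ mus) / w.sum()

# Toy usage: N = 2000 noisy observations, K = 5 experts of size n = 30 each.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 2000)
y = np.sin(4 * np.pi * X) + 0.1 * rng.normal(size=2000)
centers = np.linspace(0.1, 0.9, 5)
print(palm_predict(X, y, centers, 0.5))
```

Each expert's solve costs O(n^3) and the blend another O(K) per prediction location, matching the abstract's at-most-cubic-in-n claim; the quadratic-in-K term in the paper presumably arises from interactions among experts, which this toy blend omits.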

Bibliographic Details

Published in: Statistical Analysis and Data Mining 2021-12, Vol. 14 (6), p. 676-697
Main Authors: Edwards, Adam M.; Gramacy, Robert B.
Format: Article
Language: English
DOI: 10.1002/sam.11547
ISSN: 1932-1864
EISSN: 1932-1872
Subjects: active learning; approximate kriging neighborhoods; Approximation; boosting; Continuity (mathematics); Covariance matrix; Gaussian process; Gaussian process surrogate; Mathematical models; nearest neighbor; nonparametric regression; Nonparametric statistics; Partitioning; sequential design
Online Access: Full text