Distributed learning for sketched kernel regression

We study distributed learning for regularized least squares regression in a reproducing kernel Hilbert space (RKHS). The divide-and-conquer strategy is a frequently used approach for dealing with very large data sets: it computes an estimate on each subset and then averages the estimators. The existing theoretical constraint on the number of subsets implies that the size of each subset can still be large. Random sketching can thus be used to produce the local estimators on each subset, further reducing the computation compared with vanilla divide-and-conquer. In this setting, sketching and divide-and-conquer complement each other in dealing with the large sample size. We show that optimal learning rates can be retained. Simulations are performed to compare the sketched and non-sketched divide-and-conquer methods.
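The record contains no code; as a rough illustration of the approach the abstract describes, the following is a minimal NumPy sketch of divide-and-conquer kernel ridge regression with Gaussian random sketching on each subset. The Gaussian sketch, the RBF kernel, and all names and parameters (sketch_dim, lam, gamma, num_subsets) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    sq = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

def sketched_krr_fit(X, y, lam, sketch_dim, gamma=1.0, rng=None):
    # Kernel ridge regression on one subset with a Gaussian random sketch:
    # restrict the coefficients to alpha = S^T theta with S in R^{s x n},
    # so only an s x s linear system is solved instead of an n x n one.
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)                      # n x n kernel matrix
    S = rng.standard_normal((sketch_dim, n)) / np.sqrt(sketch_dim)
    KS = K @ S.T                                     # n x s
    A = KS.T @ KS + n * lam * (S @ KS)               # S K K S^T + n*lam*S K S^T
    theta = np.linalg.solve(A, KS.T @ y)             # sketched coefficients
    alpha = S.T @ theta                              # back to n-dimensional coefficients
    return X, alpha

def dc_sketched_krr(X, y, num_subsets, lam, sketch_dim, gamma=1.0, seed=0):
    # Divide-and-conquer: fit a sketched local estimator on each subset,
    # then predict with the average of the local predictions.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    locals_ = [sketched_krr_fit(X[block], y[block], lam, sketch_dim, gamma, rng)
               for block in np.array_split(idx, num_subsets)]

    def predict(Xnew):
        preds = [rbf_kernel(Xnew, Xj, gamma) @ aj for Xj, aj in locals_]
        return np.mean(preds, axis=0)
    return predict

# Toy usage: recover a smooth regression function from noisy samples.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(2000, 1))
    y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(2000)
    predict = dc_sketched_krr(X, y, num_subsets=10, lam=1e-3, sketch_dim=50, gamma=10.0)
    Xtest = np.linspace(0, 1, 200)[:, None]
    mse = np.mean((predict(Xtest) - np.sin(2 * np.pi * Xtest[:, 0]))**2)
    print(f"test MSE of averaged sketched estimator: {mse:.4f}")
```

In this sketch each machine works only with its own subset and a small sketched system, and the averaging of local predictions plays the role of the divide-and-conquer step discussed in the abstract.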

Bibliographic Details
Published in: Neural Networks, 2021-11, Vol. 143, pp. 368-376
Main authors: Lian, Heng; Liu, Jiamin; Fan, Zengyan
Format: Article
Language: English
Subjects: Distributed learning; Kernel method; Optimal rate; Randomized sketches
Online access: Full text
DOI: 10.1016/j.neunet.2021.06.020
Publisher: Elsevier Ltd
ORCID iDs: 0000-0002-6008-6614; 0000-0002-3114-6120; 0000-0002-0951-0045
ISSN: 0893-6080
eISSN: 1879-2782
Source: Elsevier ScienceDirect Journals
Record ID: cdi_proquest_miscellaneous_2548411819
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-07T01%3A16%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Distributed%20learning%20for%20sketched%20kernel%20regression&rft.jtitle=Neural%20networks&rft.au=Lian,%20Heng&rft.date=2021-11&rft.volume=143&rft.spage=368&rft.epage=376&rft.pages=368-376&rft.issn=0893-6080&rft.eissn=1879-2782&rft_id=info:doi/10.1016/j.neunet.2021.06.020&rft_dat=%3Cproquest_cross%3E2548411819%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2548411819&rft_id=info:pmid/&rft_els_id=S0893608021002525&rfr_iscdi=true