Kernel methods through the roof: handling billions of points efficiently

Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size. Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.

Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org 2020-11
Main Authors: Meanti, Giacomo, Carratino, Luigi, Rosasco, Lorenzo, Alessandro Rudi
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_title arXiv.org
creator Meanti, Giacomo
Carratino, Luigi
Rosasco, Lorenzo
Alessandro Rudi
description Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size. Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections. Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware. Towards this end, we designed a preconditioned gradient solver for kernel methods exploiting both GPU acceleration and parallelization with multiple GPUs, implementing out-of-core variants of common linear algebra operations to guarantee optimal hardware utilization. Further, we optimize the numerical precision of different operations and maximize efficiency of matrix-vector multiplications. As a result we can experimentally show dramatic speedups on datasets with billions of points, while still guaranteeing state of the art performance. Additionally, we make our software available as an easy to use library.
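The description above centers on a preconditioned gradient solver for kernel systems, with matrix-vector products as the dominant cost. As an illustrative sketch only — this is not the paper's actual implementation; the Gaussian kernel, the simple Jacobi (diagonal) preconditioner, and all function names here are assumptions — a preconditioned conjugate gradient solve of (K + λI)α = y might look like:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def pcg_kernel_solve(K, y, lam, tol=1e-8, max_iter=200):
    """Solve (K + lam*I) alpha = y with Jacobi-preconditioned conjugate gradient."""
    n = K.shape[0]
    matvec = lambda v: K @ v + lam * v   # the matrix-vector product: the hot loop a GPU accelerates
    M_inv = 1.0 / (np.diag(K) + lam)     # diagonal (Jacobi) preconditioner, a stand-in for fancier ones
    alpha = np.zeros(n)
    r = y - matvec(alpha)                # residual
    z = M_inv * r                        # preconditioned residual
    p = z.copy()                         # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        step = rz / (p @ Ap)
        alpha += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return alpha

# Usage on tiny synthetic data (the real setting is billions of points, out of core)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0])
K = gaussian_kernel(X, X)
alpha = pcg_kernel_solve(K, y, lam=1e-3)
```

The design point the abstract stresses is that only `matvec` touches the full kernel matrix, so it can be tiled across GPUs and computed out of core, and its precision can be tuned independently of the rest of the iteration.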
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2020-11
issn 2331-8422
language eng
recordid cdi_proquest_journals_2414910279
source Free E-Journals
subjects Hardware
Kernels
Linear algebra
Mathematical analysis
Matrix algebra
Matrix methods
Optimization
Parallel processing
title Kernel methods through the roof: handling billions of points efficiently
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T05%3A44%3A15IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Kernel%20methods%20through%20the%20roof:%20handling%20billions%20of%20points%20efficiently&rft.jtitle=arXiv.org&rft.au=Meanti,%20Giacomo&rft.date=2020-11-26&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2414910279%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2414910279&rft_id=info:pmid/&rfr_iscdi=true