Sublinear Optimization for Machine Learning

In this article we describe and analyze sublinear-time approximation algorithms for some optimization problems arising in machine learning, such as training linear classifiers and finding minimum enclosing balls. Our algorithms can be extended to some kernelized versions of these problems, such as SVDD, hard-margin SVM, and L2-SVM, for which sublinear-time algorithms were not known before. These new algorithms use a combination of novel sampling techniques and a new multiplicative update algorithm. We give lower bounds which show the running times of many of our algorithms to be nearly best possible in the unit-cost RAM model.
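The abstract refers to a "multiplicative update algorithm" at the core of the approach. As a rough illustration of that general idea (not the paper's sublinear-time algorithm, which combines the update with randomized coordinate sampling), here is a minimal sketch of the classic multiplicative-weights update over a sequence of loss vectors; the function name and step size `eta` are illustrative choices, not taken from the paper:

```python
import numpy as np

def multiplicative_weights(losses, eta=0.5):
    """Basic multiplicative-weights update over a sequence of loss
    vectors (one per round); returns the final weight distribution.

    Illustrative sketch only: the paper's algorithms additionally use
    randomized sampling to achieve sublinear running time.
    """
    n = len(losses[0])
    w = np.ones(n) / n                           # start from the uniform distribution
    for loss in losses:
        w = w * np.exp(-eta * np.asarray(loss))  # exponentially penalize lossy coordinates
        w = w / w.sum()                          # renormalize to a distribution
    return w

# Example: coordinate 0 consistently incurs less loss, so it gains weight.
rounds = [[0.0, 1.0], [0.0, 1.0], [0.1, 0.9]]
w = multiplicative_weights(rounds)
```

The update concentrates weight on low-loss coordinates at a geometric rate, which is what makes the scheme useful as a primal-dual building block in optimization.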

Detailed description

Saved in:
Bibliographic details
Published in: Journal of the ACM 2012-10, Vol.59 (5), p.1-49
Main authors: CLARKSON, Kenneth L, HAZAN, Elad, WOODRUFF, David P
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 49
container_issue 5
container_start_page 1
container_title Journal of the ACM
container_volume 59
creator CLARKSON, Kenneth L
HAZAN, Elad
WOODRUFF, David P
description In this article we describe and analyze sublinear-time approximation algorithms for some optimization problems arising in machine learning, such as training linear classifiers and finding minimum enclosing balls. Our algorithms can be extended to some kernelized versions of these problems, such as SVDD, hard-margin SVM, and L2-SVM, for which sublinear-time algorithms were not known before. These new algorithms use a combination of novel sampling techniques and a new multiplicative update algorithm. We give lower bounds which show the running times of many of our algorithms to be nearly best possible in the unit-cost RAM model.
doi_str_mv 10.1145/2371656.2371658
format Article
fulltext fulltext
identifier ISSN: 0004-5411
ispartof Journal of the ACM, 2012-10, Vol.59 (5), p.1-49
issn 0004-5411
1557-735X
language eng
recordid cdi_proquest_miscellaneous_1506361775
source ACM Digital Library Complete
subjects Algorithmics. Computability. Computer arithmetics
Algorithms
Applied sciences
Artificial intelligence
Branch & bound algorithms
Computer science; control theory; systems
Exact sciences and technology
Linear programming
Machine learning
Optimization algorithms
Studies
Theoretical computing
title Sublinear Optimization for Machine Learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T00%3A23%3A01IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Sublinear%20Optimization%20for%20Machine%20Learning&rft.jtitle=Journal%20of%20the%20ACM&rft.au=CLARKSON,%20Kenneth%20L&rft.date=2012-10-01&rft.volume=59&rft.issue=5&rft.spage=1&rft.epage=49&rft.pages=1-49&rft.issn=0004-5411&rft.eissn=1557-735X&rft.coden=JACOAH&rft_id=info:doi/10.1145/2371656.2371658&rft_dat=%3Cproquest_cross%3E2818926901%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1170617617&rft_id=info:pmid/&rfr_iscdi=true