General Convergence Results for Linear Discriminant Updates
The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of "quasi-additive" algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including both Perceptron and Winnow, but also many new algorithms. Our proof hinges on analyzing a generic measure of progress construction that gives insight as to when and how such algorithms converge. Our measure of progress construction also permits us to obtain good mistake bounds for individual algorithms. We apply our unified analysis to new algorithms as well as existing algorithms. When applied to known algorithms, our method "automatically" produces close variants of existing proofs (recovering similar bounds), thus showing that, in a certain sense, these seemingly diverse results are fundamentally isomorphic. However, we also demonstrate that the unifying principles are more broadly applicable, and analyze a new class of algorithms that smoothly interpolate between the additive-update behavior of Perceptron and the multiplicative-update behavior of Winnow.
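The quasi-additive idea described in the abstract can be illustrated with a small sketch: keep an internal state vector z that is always updated additively, but predict with weights w = f(z) for some link function f, so the identity link gives a Perceptron-style rule and the exponential link turns the additive step on z into a multiplicative (Winnow-style) step on w. The function name, learning rate, and balanced-feature trick below are illustrative assumptions, not the paper's exact construction.

```python
import math

def quasi_additive_train(examples, f, eta=1.0, epochs=20):
    """Mistake-driven training sketch: examples = [(x, y)] with y in {-1, +1}.

    Keeps an internal state z updated additively and predicts with w = f(z):
    f = identity gives a Perceptron-style rule; f = exp makes each mistake
    multiply w_i by exp(eta * y * x_i), a Winnow-style multiplicative step.
    """
    n = len(examples[0][0])
    z = [0.0] * n                          # internal state, updated additively
    for _ in range(epochs):
        mistakes = 0
        for x, y in examples:
            w = [f(zi) for zi in z]        # weights via the link function
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:             # mistake (or zero margin)
                mistakes += 1
                for i in range(n):
                    z[i] += eta * y * x[i]
        if mistakes == 0:                  # no mistakes in a full pass: stop
            break
    return [f(zi) for zi in z]

# Toy linearly separable data.
data = [([1.0, 0.0, 1.0], 1), ([0.0, 1.0, 1.0], -1), ([1.0, 1.0, 0.0], 1)]

# Perceptron-style: identity link.
w_perc = quasi_additive_train(data, f=lambda zi: zi)

# Winnow-style: the exponential link keeps weights positive, so negative
# targets need the usual balanced trick of pairing each feature with its negation.
data_bal = [(x + [-xi for xi in x], y) for x, y in data]
w_winn = quasi_additive_train(data_bal, f=math.exp, eta=0.5)
```

Links "between" identity and exponential would interpolate between additive and multiplicative behavior, in the spirit of the abstract's final claim about algorithms that smoothly bridge Perceptron and Winnow.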
Published in: | Machine learning 2001-06, Vol.43 (3), p.173-210 |
---|---|
Main authors: | Grove, Adam J; Littlestone, Nick; Schuurmans, Dale |
Format: | Article |
Language: | eng |
Subjects: | Studies |
Online access: | Full text |
container_end_page | 210 |
---|---|
container_issue | 3 |
container_start_page | 173 |
container_title | Machine learning |
container_volume | 43 |
creator | Grove, Adam J; Littlestone, Nick; Schuurmans, Dale |
description | The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of "quasi-additive" algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including both Perceptron and Winnow, but also many new algorithms. Our proof hinges on analyzing a generic measure of progress construction that gives insight as to when and how such algorithms converge. Our measure of progress construction also permits us to obtain good mistake bounds for individual algorithms. We apply our unified analysis to new algorithms as well as existing algorithms. When applied to known algorithms, our method "automatically" produces close variants of existing proofs (recovering similar bounds), thus showing that, in a certain sense, these seemingly diverse results are fundamentally isomorphic. However, we also demonstrate that the unifying principles are more broadly applicable, and analyze a new class of algorithms that smoothly interpolate between the additive-update behavior of Perceptron and the multiplicative-update behavior of Winnow. |
doi_str_mv | 10.1023/A:1010844028087 |
format | Article |
publisher | Springer Nature B.V, Dordrecht |
rights | Kluwer Academic Publishers 2001 |
fulltext | fulltext |
identifier | ISSN: 0885-6125 |
ispartof | Machine learning, 2001-06, Vol.43 (3), p.173-210 |
issn | 0885-6125 (print); 1573-0565 (electronic) |
language | eng |
recordid | cdi_proquest_miscellaneous_26567057 |
source | Springer Nature - Complete Springer Journals |
subjects | Studies |
title | General Convergence Results for Linear Discriminant Updates |