Scale-sensitive dimensions, uniform convergence, and learnability
Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions...
Saved in:
Published in: | Journal of the ACM 1997-07, Vol.44 (4), p.615-631 |
---|---|
Main authors: | Alon, Noga; Ben-David, Shai; Cesa-Bianchi, Nicolo; Haussler, David |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 631 |
---|---|
container_issue | 4 |
container_start_page | 615 |
container_title | Journal of the ACM |
container_volume | 44 |
creator | Alon, Noga; Ben-David, Shai; Cesa-Bianchi, Nicolo; Haussler, David |
description | Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper, we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Giné, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or "agnostic") framework. Furthermore, we find a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class. |
doi_str_mv | 10.1145/263867.263927 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0004-5411 |
ispartof | Journal of the ACM, 1997-07, Vol.44 (4), p.615-631 |
issn | 0004-5411; 1557-735X |
language | eng |
recordid | cdi_proquest_miscellaneous_29026690 |
source | ACM Digital Library |
subjects | Artificial intelligence; Combinatorial analysis; Computer programming; Convergence; Hypotheses; Learning; Mathematical functions; Mathematical models; Mathematics; Probabilistic methods; Probability theory; Random variables; Regression; Studies |
title | Scale-sensitive dimensions, uniform convergence, and learnability |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-14T11%3A09%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Scale-sensitive%20dimensions,%20uniform%20convergence,%20and%20learnability&rft.jtitle=Journal%20of%20the%20ACM&rft.au=Alon,%20Noga&rft.date=1997-07-01&rft.volume=44&rft.issue=4&rft.spage=615&rft.epage=631&rft.pages=615-631&rft.issn=0004-5411&rft.eissn=1557-735X&rft.coden=JACOAH&rft_id=info:doi/10.1145/263867.263927&rft_dat=%3Cproquest_cross%3E27336350%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=194223507&rft_id=info:pmid/&rfr_iscdi=true |
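The "simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension" mentioned in the abstract is a scale-sensitive (fat-shattering) dimension: a class F γ-shatters points x_1, …, x_d if there exist witness levels r_1, …, r_d such that every sign pattern s ∈ {-1, +1}^d is realized by some f ∈ F with s_i(f(x_i) - r_i) ≥ γ for all i. As a rough illustration only, not the paper's construction, the following brute-force Python sketch checks γ-shattering for a finite function class given by its value tuples at the points under test; the function names and the breakpoint-based witness search are this sketch's own simplifications.

```python
from itertools import combinations, product

def gamma_shatters(values, gamma):
    """values: one tuple per function, giving that function's values at the
    d points under test. Returns True if some witness vector r makes every
    sign pattern realizable with margin gamma."""
    d = len(values[0])
    # Per point, candidate witness levels: the breakpoints v +/- gamma plus
    # midpoints between consecutive breakpoints. For a finite class this
    # suffices, since which values count as "high" (>= r + gamma) or "low"
    # (<= r - gamma) only changes at these breakpoints.
    candidates = []
    for i in range(d):
        bps = sorted({v[i] - gamma for v in values} | {v[i] + gamma for v in values})
        mids = [(a + b) / 2 for a, b in zip(bps, bps[1:])]
        candidates.append(bps + mids)
    for r in product(*candidates):
        # Every sign pattern must be realized by some function with margin gamma.
        if all(any(all((f[i] - r[i]) * s[i] >= gamma for i in range(d))
                   for f in values)
               for s in product((-1, 1), repeat=d)):
            return True
    return False

def fat_shattering_dim(values, gamma):
    """Largest d such that some d-subset of the evaluation points is
    gamma-shattered (brute force over all subsets)."""
    n = len(values[0])
    best = 0
    for d in range(1, n + 1):
        for idx in combinations(range(n), d):
            sub = [tuple(f[i] for i in idx) for f in values]
            if gamma_shatters(sub, gamma):
                best = d
                break
    return best
```

For example, the four binary functions {(0,0), (0,1), (1,0), (1,1)} on two points are 0.4-shattered (witness r = (0.5, 0.5) gives margin 0.5) but not 0.6-shattered, which is exactly the scale-sensitivity the abstract emphasizes: the effective complexity of the class depends on the accuracy scale γ.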