Learning by the Process of Elimination

Elimination of potential hypotheses is a fundamental component of many learning processes. To understand the nature of elimination, we study the following model of learning recursive functions from examples. On any target function, the learning machine has to eliminate all but one of the possible hypotheses, such that the remaining one correctly describes the target function. It turns out that this type of learning by the process of elimination (elm-learning, for short) can be stronger than, weaker than, or of the same power as usual Gold-style learning. While for usual learning any r.e. class of recursive functions can be learned in all of its numberings, this is no longer true for elm-learning. For elm-learnability of an r.e. class in a given one of its numberings, we derive sufficient conditions on this numbering (decidability of index equivalence and paddability) as well as a condition that is both necessary and sufficient. We then address which r.e. classes are elm-learnable in all of their numberings and which are not. Elm-learning of arbitrary classes of recursive functions is shown to be of the same power as usual learning. For elm-learnability of an arbitrary class in an arbitrary numbering, paddability of the numbering remains useful, whereas decidability of index equivalence can be "maximally weak" or "extremely useful". We also give a characterization of elm-learnability of an arbitrary class of recursive functions. Finally, we consider some generalizations of elm-learning. One of them has the same power as usual learning by teams; a further generalization even allows learning the class of all recursive functions.
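To give an intuition for the model, here is a minimal, hypothetical sketch (not taken from the paper, and restricted to a finite hypothesis class rather than a numbering of recursive functions): the learner sees examples (x, f(x)) of a target function f and discards every hypothesis that disagrees with an example, succeeding once all hypotheses save one have been eliminated.

```python
# Toy illustration of learning by elimination over a FINITE hypothesis
# class (the paper treats numberings of recursive functions; this sketch
# only conveys the elimination idea). Names here are illustrative.

def elm_learn(hypotheses, target, domain):
    """Feed examples (x, target(x)) and eliminate inconsistent hypotheses.

    `hypotheses` maps an index to a candidate function on `domain`.
    Returns the index of the sole surviving hypothesis, or None if
    elimination never narrows the set down to exactly one.
    """
    alive = set(hypotheses)
    for x in domain:
        y = target(x)  # the next example (x, f(x))
        alive = {i for i in alive if hypotheses[i](x) == y}
        if len(alive) == 1:  # all hypotheses save one eliminated
            return next(iter(alive))
    return None

# Target f(x) = 2x among three candidate hypotheses.
hyps = {0: lambda x: x, 1: lambda x: 2 * x, 2: lambda x: x * x}
print(elm_learn(hyps, lambda x: 2 * x, range(10)))  # -> 1
```

The example (1, 2) alone rules out both wrong candidates, so the learner converges after two examples; in the paper's setting the hypothesis space is an infinite numbering, which is what makes elm-learnability nontrivial.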

Bibliographic Details
Published in: Information and Computation, 2002-07, Vol. 176 (1), pp. 37-50
Authors: Freivalds, Rūsiņš; Karpinski, Marek; Smith, Carl H.; Wiehagen, Rolf
Format: Article
Language: English
Online access: Full text
DOI: 10.1006/inco.2001.2922
Publisher: Elsevier Inc, San Diego, CA
ISSN: 0890-5401
EISSN: 1090-2651
Source: Access via ScienceDirect (Elsevier); EZB Electronic Journals Library
Subjects: Applied sciences; Artificial intelligence; Computer science, control theory, systems; Exact sciences and technology; Learning and adaptive systems