Learning by the Process of Elimination

Bibliographic Details
Published in: Information and Computation, 2002-07, Vol. 176 (1), pp. 37-50
Main authors: Freivalds, Rūsiņš; Karpinski, Marek; Smith, Carl H.; Wiehagen, Rolf
Format: Article
Language: English
Description
Abstract: Elimination of potential hypotheses is a fundamental component of many learning processes. In order to understand the nature of elimination, herein we study the following model of learning recursive functions from examples. On any target function, the learning machine has to eliminate all, save one, possible hypotheses such that the remaining one correctly describes the target function. It turns out that this type of learning by the process of elimination (elm-learning, for short) can be stronger than, weaker than, or of the same power as usual Gold-style learning. While for usual learning any r.e. class of recursive functions can be learned in all of its numberings, this is no longer true for elm-learning. For elm-learnability of an r.e. class in a given one of its numberings, we derive sufficient conditions on this numbering (decidability of index equivalence and paddability) as well as a condition that is both necessary and sufficient. Then we deal with the problem of which r.e. classes are elm-learnable in all of their numberings and which are not. Elm-learning of arbitrary classes of recursive functions is shown to be of the same power as usual learning. For elm-learnability of an arbitrary class in an arbitrary numbering, paddability of this numbering remains useful, whereas decidability of index equivalence can be “maximally weak” or “extremely useful”. We also give a characterization of elm-learnability for an arbitrary class of recursive functions. Finally, we consider some generalizations of elm-learning. One of them is of the same power as usual learning by teams. A further generalization even makes it possible to learn the class of all recursive functions.
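
The elimination idea described in the abstract can be illustrated with a small toy sketch (a hypothetical Python illustration, not the paper's formal model): here the hypothesis space is a finite list of total functions on small arguments, whereas the paper works with r.e. classes of recursive functions and their numberings. The learner reads examples (x, f(x)) of the target f and eliminates every index whose function disagrees with an example, aiming to leave exactly one surviving index.

    from typing import Callable, List

    def elm_learn(numbering: List[Callable[[int], int]],
                  target: Callable[[int], int],
                  sample_size: int) -> List[int]:
        # Start with every index as a live hypothesis and eliminate those
        # that contradict an observed example (x, target(x)).
        surviving = list(range(len(numbering)))
        for x in range(sample_size):
            y = target(x)
            surviving = [i for i in surviving if numbering[i](x) == y]
            if len(surviving) == 1:   # all hypotheses but one eliminated
                break
        return surviving

    # Toy numbering of four total functions; the target is numbering[1].
    numbering = [lambda x: x, lambda x: 2 * x, lambda x: x * x, lambda x: x + 1]
    print(elm_learn(numbering, lambda x: 2 * x, sample_size=5))  # -> [1]

In this finite setting elimination trivially succeeds; the paper's results concern when such elimination-based identification is possible at all for infinite numberings, and how its power compares with usual Gold-style learning.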
ISSN: 0890-5401, 1090-2651
DOI: 10.1006/inco.2001.2922