Algorithmic identification of probabilities is hard

Bibliographic Details
Published in: Journal of Computer and System Sciences, 2018-08, Vol. 95, pp. 98-108
Authors: Bienvenu, Laurent; Figueira, Santiago; Monin, Benoit; Shen, Alexander
Format: Article
Language: English
Online access: Full text
Description
Abstract: Reading more and more bits from an infinite binary sequence that is random for a Bernoulli measure with parameter p, we can get better and better approximations of p using the strong law of large numbers. In this paper, we study a similar situation from the viewpoint of inductive inference. Assume that p is a computable real, and we have to eventually guess the program that computes p. We show that this cannot be done computably, and extend this result to more general computable distributions. We also provide a weak positive result showing that, looking at a sequence X generated according to some computable probability measure, we can guess a sequence of algorithms that, starting from some point, compute a measure that makes X Martin-Löf random.

Highlights:
• Inductive inference of probability measures from their random elements is studied.
• We disprove the main claim of the original paper by Vitanyi and Chater.
• We show that learning cannot be achieved if we require bounded deficiency.
• If we remove the bounded deficiency requirement, we do get a weak positive result.
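As a quick illustration of the estimation the abstract alludes to, the following Python sketch simulates a Bernoulli(p) sequence and prints the prefix frequencies, which converge to p by the strong law of large numbers. This is only an illustrative sketch: the value p = 0.3 and the use of a pseudorandom generator in place of a truly Bernoulli-random sequence are assumptions made here, not details from the paper.

    import random

    def estimate_p(bits):
        # Prefix frequency of 1s: converges to p almost surely
        # by the strong law of large numbers.
        return sum(bits) / len(bits)

    # Illustrative assumptions: p = 0.3 is arbitrary, and random.random()
    # stands in for a sequence that is random for the Bernoulli(p) measure.
    p = 0.3
    random.seed(0)
    bits = [1 if random.random() < p else 0 for _ in range(100_000)]

    for n in (100, 1_000, 10_000, 100_000):
        print(f"n={n:>6}: estimate={estimate_p(bits[:n]):.4f}")

Note that the paper's negative result concerns a harder task than this pointwise estimation: computably and eventually guessing a program for p itself, which the authors show is impossible in general.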
ISSN: 0022-0000, 1090-2724
DOI: 10.1016/j.jcss.2018.01.002