Algorithmic identification of probabilities is hard
Reading more and more bits from an infinite binary sequence that is random for a Bernoulli measure with parameter p, we can get better and better approximations of p using the strong law of large numbers. In this paper, we study a similar situation from the viewpoint of inductive inference. Assume that p is a computable real, and we have to eventually guess the program that computes p. We show that this cannot be done computably, and extend this result to more general computable distributions. We also provide a weak positive result showing that looking at a sequence X generated according to some computable probability measure, we can guess a sequence of algorithms that, starting from some point, compute a measure that makes X Martin-Löf random.
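As an illustration of the abstract's first sentence, here is a minimal sketch (not taken from the paper) of how the empirical frequency of ones in longer and longer prefixes of a Bernoulli(p) sequence approximates p; the function name, the random seed, and the value p = 0.3 are arbitrary choices made for this demo.

```python
import random

def empirical_frequency(bits):
    # By the strong law of large numbers, the fraction of ones in a prefix
    # of a Bernoulli(p) sequence converges to p almost surely.
    return sum(bits) / len(bits)

# Demo: sample a Bernoulli(p) sequence and estimate p from ever longer prefixes.
p = 0.3          # arbitrary parameter chosen for the illustration
random.seed(0)   # fixed seed so the run is reproducible
sequence = [1 if random.random() < p else 0 for _ in range(100_000)]

for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}: estimate = {empirical_frequency(sequence[:n]):.4f}")
```

The paper's point is that this kind of limit approximation does not extend to the inductive-inference setting: one cannot computably converge to a program that computes p.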
Saved in:
Published in: | Journal of computer and system sciences 2018-08, Vol.95, p.98-108 |
---|---|
Main authors: | Bienvenu, Laurent; Figueira, Santiago; Monin, Benoit; Shen, Alexander |
Format: | Article |
Language: | eng |
Subjects: | Algorithmic learning theory; Algorithmic randomness; Computer Science; Information Theory; Logic; Mathematics; Probability |
Online access: | Full text |
container_end_page | 108 |
---|---|
container_issue | |
container_start_page | 98 |
container_title | Journal of computer and system sciences |
container_volume | 95 |
creator | Bienvenu, Laurent; Figueira, Santiago; Monin, Benoit; Shen, Alexander |
description | Reading more and more bits from an infinite binary sequence that is random for a Bernoulli measure with parameter p, we can get better and better approximations of p using the strong law of large numbers. In this paper, we study a similar situation from the viewpoint of inductive inference. Assume that p is a computable real, and we have to eventually guess the program that computes p. We show that this cannot be done computably, and extend this result to more general computable distributions. We also provide a weak positive result showing that looking at a sequence X generated according to some computable probability measure, we can guess a sequence of algorithms that, starting from some point, compute a measure that makes X Martin-Löf random.
•Inductive inference of probability measures from their random elements is studied.•We disprove the main claim of the original paper by Vitanyi and Chater.•We indeed show that learning cannot be achieved if we require bounded deficiency.•If we remove the bounded deficiency requirement, we do get a weak positive result. |
doi_str_mv | 10.1016/j.jcss.2018.01.002 |
format | Article |
identifier | ISSN: 0022-0000 |
ispartof | Journal of computer and system sciences, 2018-08, Vol.95, p.98-108 |
issn | 0022-0000 1090-2724 |
language | eng |
source | Elsevier ScienceDirect Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | Algorithmic learning theory; Algorithmic randomness; Computer Science; Information Theory; Logic; Mathematics; Probability |
title | Algorithmic identification of probabilities is hard |