Bayes optimal instance-based learning

In this paper we present a probabilistic formalization of the instance-based learning approach. In our Bayesian framework, moving from the construction of an explicit hypothesis to a data-driven instance-based learning approach is equivalent to averaging over all the (possibly infinitely many) individual models. The general Bayesian instance-based learning framework described in this paper can be applied with any set of assumptions defining a parametric model family, and to any discrete prediction task where the number of simultaneously predicted attributes is small, which includes, for example, all classification tasks prevalent in the machine learning literature. To illustrate the use of the suggested general framework in practice, we show how the approach can be implemented in the special case with the strong independence assumptions underlying the so-called Naive Bayes classifier. The resulting Bayesian instance-based classifier is validated empirically with public domain data sets, and the results are compared to the performance of the traditional Naive Bayes classifier. The results suggest that the Bayesian instance-based learning approach yields better results than the traditional Naive Bayes classifier, especially in cases where the amount of training data is small.
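The abstract's central claim, that instance-based prediction corresponds to averaging over all the individual models rather than committing to one hypothesis, has a simple closed form in the Naive Bayes special case. The sketch below is a minimal illustration under assumed Dirichlet priors, not a reproduction of the paper's exact algorithm: integrating the Naive Bayes likelihood over all parameter values reduces to predicting with smoothed relative frequencies. The function name predictive_class_probs, the single hyperparameter alpha, and the uniform attribute arity n_values are illustrative assumptions.

```python
import numpy as np

def predictive_class_probs(X_train, y_train, x_new, n_classes, n_values, alpha=1.0):
    """P(class | x_new, data) with the model parameters integrated out.

    Under Dirichlet(alpha) priors, averaging the Naive Bayes model over
    all parameter values reduces to smoothed relative frequencies:
    (count + alpha) / (total + alpha * number_of_outcomes).

    X_train : (N, d) ints in {0..n_values-1}, the stored instances
    y_train : (N,) ints in {0..n_classes-1}, their class labels
    x_new   : (d,) query instance to classify
    """
    N, d = X_train.shape
    probs = np.empty(n_classes)
    for c in range(n_classes):
        in_class = (y_train == c)
        N_c = in_class.sum()
        # Posterior-predictive class prior
        p = (N_c + alpha) / (N + alpha * n_classes)
        # One conditionally independent factor per attribute
        for j in range(d):
            n_match = (X_train[in_class, j] == x_new[j]).sum()
            p *= (n_match + alpha) / (N_c + alpha * n_values)
        probs[c] = p
    return probs / probs.sum()

# Tiny usage example: 4 instances, 2 binary attributes, 2 classes
X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])
y = np.array([0, 0, 1, 1])
print(predictive_class_probs(X, y, np.array([0, 1]), n_classes=2, n_values=2))
```

With small training sets the alpha pseudo-counts dominate and no probability is ever estimated as zero, which is consistent with the abstract's observation that the Bayesian treatment helps most when the amount of training data is small.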

Bibliographic Details
Main authors: Kontkanen, Petri; Myllymäki, Petri; Silander, Tomi; Tirri, Henry
Format: Book Chapter
Language: English
Subjects:
Online access: Full text
container_end_page 88
container_start_page 77
container_title Lecture notes in computer science
creator Kontkanen, Petri
Myllymäki, Petri
Silander, Tomi
Tirri, Henry
doi_str_mv 10.1007/BFb0026675
format Book Chapter
fulltext fulltext
identifier ISSN: 0302-9743
ispartof Lecture notes in computer science, 2005, p.77-88
issn 0302-9743
1611-3349
language eng
recordid cdi_pascalfrancis_primary_2042285
source Springer Books
subjects Applied sciences
Bayesian Network
Exact sciences and technology
Feedforward Neural Network Model
Information theory
Information, signal and communications theory
Model Family
Predictive Distribution
Sixth International Workshop
Telecommunications and information theory
title Bayes optimal instance-based learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T15%3A12%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-pascalfrancis_sprin&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=bookitem&rft.atitle=Bayes%20optimal%20instance-based%20learning&rft.btitle=Lecture%20notes%20in%20computer%20science&rft.au=Kontkanen,%20Petri&rft.date=2005-06-16&rft.spage=77&rft.epage=88&rft.pages=77-88&rft.issn=0302-9743&rft.eissn=1611-3349&rft.isbn=9783540644170&rft.isbn_list=3540644172&rft_id=info:doi/10.1007/BFb0026675&rft_dat=%3Cpascalfrancis_sprin%3E2042285%3C/pascalfrancis_sprin%3E%3Curl%3E%3C/url%3E&rft.eisbn=3540697810&rft.eisbn_list=9783540697817&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true