A bagging SVM to learn from positive and unlabeled examples

We consider the problem of learning a binary classifier from a training set of positive and unlabeled examples, both in the inductive and in the transductive setting. This problem, often referred to as PU learning, differs from the standard supervised classification problem by the lack of negative examples in the training set. It corresponds to a ubiquitous situation in many applications, such as information retrieval or gene ranking, where we have identified a set of data of interest sharing a particular property and wish to automatically retrieve additional data sharing the same property from a large, easily available pool of unlabeled data. We propose a conceptually simple method, akin to bagging, that addresses both inductive and transductive PU learning by converting them into a series of supervised binary classification problems discriminating the known positive examples from random subsamples of the unlabeled set. We empirically demonstrate the relevance of the method on simulated and real data, where it performs at least as well as existing methods while being faster.
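
The bagging procedure the abstract outlines is straightforward to prototype. Below is a minimal Python sketch, assuming scikit-learn; the helper name bagging_svm_pu, the parameters n_bootstraps and K, and the out-of-bag score aggregation are illustrative choices, not the authors' reference implementation.

```python
import numpy as np
from sklearn.svm import SVC

def bagging_svm_pu(X_pos, X_unl, n_bootstraps=100, K=None, seed=None):
    """Score unlabeled points by bagging SVMs trained to discriminate
    the positives from random subsamples of the unlabeled set.
    Higher average scores suggest a point is more likely positive."""
    rng = np.random.default_rng(seed)
    K = K if K is not None else len(X_pos)  # subsample size for the "negatives"
    n_unl = len(X_unl)
    scores = np.zeros(n_unl)
    counts = np.zeros(n_unl)  # times each point was scored out-of-bag

    for _ in range(n_bootstraps):
        # Draw a random subsample of the unlabeled pool to act as negatives.
        idx = rng.choice(n_unl, size=K, replace=True)
        X_train = np.vstack([X_pos, X_unl[idx]])
        y_train = np.concatenate([np.ones(len(X_pos)), np.zeros(K)])

        clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

        # Aggregate decision values only on out-of-bag points, i.e. the
        # unlabeled examples not drawn as negatives in this round.
        oob = np.setdiff1d(np.arange(n_unl), idx)
        scores[oob] += clf.decision_function(X_unl[oob])
        counts[oob] += 1

    return scores / np.maximum(counts, 1)  # average out-of-bag score

# Illustrative use on synthetic data: 20 labeled positives, plus an
# unlabeled pool of 500 points containing 50 hidden positives.
g = np.random.default_rng(0)
X_pos = g.normal(loc=2.0, size=(20, 5))
X_unl = np.vstack([g.normal(loc=2.0, size=(50, 5)),    # hidden positives
                   g.normal(loc=0.0, size=(450, 5))])  # true negatives
ranking = np.argsort(-bagging_svm_pu(X_pos, X_unl, n_bootstraps=50, seed=0))
```

Averaging only out-of-bag scores keeps each unlabeled point's score independent of the rounds in which it was (possibly wrongly) used as a negative, which makes the transductive reading of the method natural.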

Bibliographic details
Main authors: Mordelet, Fantine; Vert, Jean-Philippe
Format: Article
Language: English
Subjects: Statistics - Machine Learning
Date: 2010-10-05
DOI: 10.48550/arxiv.1010.0772
Source: arXiv.org
Full text: https://arxiv.org/abs/1010.0772