Learning the Truth From Only One Side of the Story

Learning under one-sided feedback (i.e., where we only observe the labels for examples we predicted positively on) is a fundamental problem in machine learning -- applications include lending and recommendation systems. Despite this, there has been surprisingly little progress made in ways to mitigate the effects of the sampling bias that arises. We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution. We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically. Our method leverages variance estimation techniques to efficiently learn under uncertainty, offering a more principled alternative compared to existing approaches.
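The abstract's core idea (only accepted examples reveal labels, so an uncertainty bonus from variance estimation keeps the learner from prematurely rejecting informative examples) might be sketched, very loosely, as follows. This is an illustrative toy, not the authors' algorithm: the logistic model, the LinUCB-style bonus form, and all constants (`lr`, `alpha`, `T`) are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 3000
theta_true = np.array([1.0, -1.0, 0.5])  # hypothetical ground-truth model

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-sided feedback: the label y is revealed ONLY for examples we accept.
theta = np.zeros(d)
A = np.eye(d)            # regularized Gram matrix of accepted examples
lr, alpha = 0.2, 1.0     # step size and exploration coefficient (assumed)
accepted = 0

for t in range(T):
    x = rng.normal(size=d)
    # Optimistic score: point estimate plus a variance-based bonus, so
    # uncertain examples near the decision boundary are still accepted.
    bonus = alpha * np.sqrt(x @ np.linalg.solve(A, x))
    if x @ theta + bonus > 0:                         # accept -> observe y
        y = rng.random() < sigmoid(x @ theta_true)    # true label
        theta += lr * (y - sigmoid(x @ theta)) * x    # SGD on logistic loss
        A += np.outer(x, x)                           # shrink future bonuses
        accepted += 1

# Directional agreement with the true parameter vector.
cos = theta @ theta_true / (np.linalg.norm(theta) * np.linalg.norm(theta_true))
```

A purely greedy rule (accepting only when `x @ theta > 0`) can stop accepting anything after a few unlucky early updates and then never receives another label; the bonus term shrinks as the Gram matrix `A` accumulates accepted examples, recovering greedy behavior in the limit.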

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Jiang, Heinrich; Jiang, Qijia; Pacchiano, Aldo
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description: Learning under one-sided feedback (i.e., where we only observe the labels for examples we predicted positively on) is a fundamental problem in machine learning -- applications include lending and recommendation systems. Despite this, there has been surprisingly little progress made in ways to mitigate the effects of the sampling bias that arises. We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution. We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically. Our method leverages variance estimation techniques to efficiently learn under uncertainty, offering a more principled alternative compared to existing approaches.
DOI: 10.48550/arxiv.2006.04858
Published: 2020-06-08
Rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read)
Source: arXiv.org
Subjects: Computer Science - Learning; Statistics - Machine Learning
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T12%3A10%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20the%20Truth%20From%20Only%20One%20Side%20of%20the%20Story&rft.au=Jiang,%20Heinrich&rft.date=2020-06-08&rft_id=info:doi/10.48550/arxiv.2006.04858&rft_dat=%3Carxiv_GOX%3E2006_04858%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_iscdi=true