End-to-End Weak Supervision


Bibliographic Details

Main authors: Cachay, Salva Rühling; Boecking, Benedikt; Dubrawski, Artur
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning
creator Cachay, Salva Rühling; Boecking, Benedikt; Dubrawski, Artur
description Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021). Aggregating multiple sources of weak supervision (WS) can ease the data-labeling bottleneck prevalent in many machine learning applications by replacing the tedious manual collection of ground-truth labels. Current state-of-the-art approaches that do not use any labeled training data, however, require two separate modeling steps: learning a probabilistic latent variable model based on the WS sources -- making assumptions that rarely hold in practice -- followed by downstream model training. Importantly, the first modeling step does not consider the performance of the downstream model. To address these caveats, we propose an end-to-end approach for directly learning the downstream model by maximizing its agreement with probabilistic labels generated by reparameterizing previous probabilistic posteriors with a neural network. Our results show improved performance over prior work, both in terms of end-model performance on downstream test sets and in terms of robustness to dependencies among weak supervision sources.
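A minimal, illustrative sketch of the approach described above (PyTorch; not the authors' implementation): a small neural network maps the weak-source votes for each example to a probabilistic label, and the downstream classifier is trained jointly by minimizing the cross-entropy between its predictions and those probabilistic labels, so both networks are optimized end to end. The module sizes, the vote encoding, and the specific agreement loss are assumptions made for illustration only.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    n_sources, n_classes, n_features = 5, 3, 20

    # Network that reparameterizes the posterior over labels given the weak votes.
    label_model = nn.Sequential(
        nn.Linear(n_sources * n_classes, 64), nn.ReLU(), nn.Linear(64, n_classes))
    # Downstream (end) model that operates on the raw input features.
    end_model = nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_classes))

    opt = torch.optim.Adam(
        list(label_model.parameters()) + list(end_model.parameters()), lr=1e-3)

    # Toy batch: flattened one-hot-style votes from the weak sources, plus features.
    votes = torch.randn(32, n_sources * n_classes)
    x = torch.randn(32, n_features)

    for step in range(100):
        soft_labels = F.softmax(label_model(votes), dim=-1)   # probabilistic labels
        log_preds = F.log_softmax(end_model(x), dim=-1)       # end-model predictions
        # Agreement objective: cross-entropy between the two distributions.
        # Gradients flow into both networks, so the pipeline is trained end to end.
        loss = -(soft_labels * log_preds).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

A practical implementation would also need safeguards against degenerate agreement (for example, both networks collapsing onto a single class), which this toy example omits.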
doi_str_mv 10.48550/arxiv.2107.02233
format Article
identifier DOI: 10.48550/arxiv.2107.02233
language eng
recordid cdi_arxiv_primary_2107_02233
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
Statistics - Machine Learning
title End-to-End Weak Supervision
url https://arxiv.org/abs/2107.02233