End-to-End Weak Supervision
Format: Article
Language: English
Published in: Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021)

Abstract: Aggregating multiple sources of weak supervision (WS) can ease the data-labeling bottleneck prevalent in many machine learning applications, by replacing the tedious manual collection of ground-truth labels. Current state-of-the-art approaches that do not use any labeled training data, however, require two separate modeling steps: learning a probabilistic latent variable model based on the WS sources -- making assumptions that rarely hold in practice -- followed by downstream model training. Importantly, the first step of modeling does not consider the performance of the downstream model. To address these caveats, we propose an end-to-end approach for directly learning the downstream model by maximizing its agreement with probabilistic labels generated by reparameterizing previous probabilistic posteriors with a neural network. Our results show improved performance over prior work in terms of end-model performance on downstream test sets, as well as improved robustness to dependencies among weak supervision sources.
DOI: 10.48550/arxiv.2107.02233
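The method sentence in the abstract (training the downstream model by maximizing its agreement with probabilistic labels produced by a neural reparameterization of the label-model posterior) can be illustrated with a minimal PyTorch sketch. All class names, layer sizes, and the symmetric cross-entropy agreement loss below are illustrative assumptions, not the paper's exact architecture or objective.

```python
# Minimal sketch of end-to-end weak supervision (illustrative assumptions:
# the posterior over latent labels is reparameterized as a small MLP over the
# weak-source votes, and the end model is trained jointly by maximizing
# agreement with those probabilistic labels).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_sources, n_classes, d_feat = 10, 2, 64  # hypothetical sizes

class PosteriorNet(nn.Module):
    """Maps weak-source votes lambda(x) to a probabilistic label q(y | lambda)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_sources, 32), nn.ReLU(),
                                 nn.Linear(32, n_classes))
    def forward(self, votes):
        return F.softmax(self.net(votes), dim=-1)

class EndModel(nn.Module):
    """Downstream classifier on the raw features x."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_feat, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))
    def forward(self, x):
        return F.softmax(self.net(x), dim=-1)

posterior, end_model = PosteriorNet(), EndModel()
opt = torch.optim.Adam(
    list(posterior.parameters()) + list(end_model.parameters()), lr=1e-3)

def agreement_loss(p, q, eps=1e-8):
    # Symmetric cross-entropy between the two distributions: each network is
    # pushed toward the other's (detached) predictions.
    ce = lambda target, pred: -(target * (pred + eps).log()).sum(-1).mean()
    return ce(q.detach(), p) + ce(p.detach(), q)

# One training step on a synthetic unlabeled batch (features + weak votes).
x = torch.randn(128, d_feat)                                    # unlabeled features
votes = torch.randint(0, n_classes, (128, n_sources)).float()   # weak-source outputs
q = posterior(votes)   # probabilistic labels from the reparameterized posterior
p = end_model(x)       # downstream model predictions
loss = agreement_loss(p, q)
opt.zero_grad(); loss.backward(); opt.step()
```

The key point of the sketch is that a single objective updates both networks at once, so the quality of the probabilistic labels is shaped by the downstream model's performance rather than fixed in a separate, upstream modeling step.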