Resolving label uncertainty with implicit posterior models
Saved in:

Main authors: , , , , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: We propose a method for jointly inferring labels across a collection of data samples, where each sample consists of an observation and a prior belief about the label. By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs. This formulation unifies various machine learning settings; the weak beliefs can come in the form of noisy or incomplete labels, likelihoods given by a different prediction mechanism on auxiliary input, or common-sense priors reflecting knowledge about the structure of the problem at hand. We demonstrate the proposed algorithms on diverse problems: classification with negative training examples, learning from rankings, weakly and self-supervised aerial imagery segmentation, co-segmentation of video frames, and coarsely supervised text classification.
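The abstract's central idea of training a predictor against per-sample prior beliefs rather than hard labels can be illustrated with a generic toy sketch. This is not the paper's actual objective, only a simplified stand-in: a linear softmax classifier fit by cross-entropy against soft belief distributions (all data, names, and hyperparameters below are hypothetical).

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)

# Synthetic data: observations x with weak beliefs in place of hard labels.
n, d, k = 200, 5, 2
x = rng.normal(size=(n, d))
true_w = rng.normal(size=(d, k))
hard = softmax(x @ true_w).argmax(axis=1)

# Weak belief per sample: 0.7 mass on the plausible label, rest spread evenly.
beliefs = np.full((n, k), 0.3 / (k - 1))
beliefs[np.arange(n), hard] = 0.7

# Gradient descent on mean cross-entropy H(belief, prediction); for softmax
# cross-entropy the gradient is x^T (prediction - target) / n.
w = np.zeros((d, k))
for _ in range(500):
    probs = softmax(x @ w)
    w -= 0.5 * (x.T @ (probs - beliefs) / n)

accuracy = (softmax(x @ w).argmax(axis=1) == hard).mean()
```

The point of the sketch is only that soft, uncertain supervision still yields a usable gradient signal; the paper derives its objective from an implicit generative model rather than this direct cross-entropy.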
DOI: 10.48550/arxiv.2202.14000