Self-supervised self-supervision by combining deep learning and probabilistic logic
Format: Article
Language: English
DOI: 10.48550/arxiv.2012.12474
Abstract: Labeling training examples at scale is a perennial challenge in machine learning. Self-supervision methods compensate for the lack of direct supervision by leveraging prior knowledge to automatically generate noisy labeled examples. Deep probabilistic logic (DPL) is a unifying framework for self-supervised learning that represents unknown labels as latent variables and incorporates diverse self-supervision using probabilistic logic to train a deep neural network end-to-end using variational EM. While DPL is successful at combining pre-specified self-supervision, manually crafting self-supervision to attain high accuracy may still be tedious and challenging. In this paper, we propose Self-Supervised Self-Supervision (S4), which adds to DPL the capability to learn new self-supervision automatically. Starting from an initial "seed," S4 iteratively uses the deep neural network to propose new self-supervision. These proposals are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). Experiments show that S4 is able to automatically propose accurate self-supervision and can often nearly match the accuracy of supervised methods with a tiny fraction of the human effort.
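
The DPL training recipe summarized in the abstract (latent labels, probabilistic-logic factors, a deep network trained by variational EM) can be illustrated with a toy sketch. Everything below is an assumption made for illustration, not the paper's implementation: binary labels, three hand-written labeling functions standing in for the logic factors, logistic regression standing in for the deep neural network, and invented names such as `labeling_fns`, `lf_weights`, `e_step`, and `m_step`.

```python
# Toy sketch of DPL-style variational EM (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 examples, 5 features; the true labels stay hidden from training.
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, -1.5, 0.5, 0.0, 1.0])
y_true = (X @ true_w > 0).astype(float)

# "Self-supervision": noisy labeling functions that vote for a label or abstain (None).
labeling_fns = [
    lambda x: 1.0 if x[0] > 0.5 else None,
    lambda x: 0.0 if x[1] > 0.5 else None,
    lambda x: 1.0 if x[4] > 1.0 else None,
]
lf_weights = np.ones(len(labeling_fns))  # confidence weight per logic factor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)  # parameters of the "network" (logistic regression stand-in)

def e_step():
    """E-step: soft posterior q(y=1|x) fusing network log-odds and weighted votes."""
    logits = X @ w
    for weight, lf in zip(lf_weights, labeling_fns):
        for i, x in enumerate(X):
            vote = lf(x)
            if vote is not None:
                logits[i] += weight * (1.0 if vote == 1.0 else -1.0)
    return sigmoid(logits)

def m_step(q, lr=0.1, steps=50):
    """M-step: fit the network to the soft labels q by gradient descent."""
    global w
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - q) / len(X)

for it in range(10):  # variational EM loop
    m_step(e_step())
    acc = np.mean((sigmoid(X @ w) > 0.5) == y_true)
    print(f"EM iteration {it}: agreement with hidden labels = {acc:.2f}")
```

The E-step fuses the factors' votes with the network's current predictions into a soft posterior over the latent labels; the M-step then fits the network to that posterior, which is the sense in which noisy self-supervision trains the model end-to-end.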
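
The S4 loop described in the abstract can then be sketched on top of the toy DPL above: use the current model's beliefs to propose a candidate rule, and either verify it with a (simulated) expert or add it outright. The proposer, the acceptance test, and all thresholds are likewise invented for illustration (`propose_lf`, `expert_accepts`); the paper's actual proposal and verification mechanisms are more sophisticated.

```python
# Toy sketch of the S4 outer loop, reusing X, true_w, labeling_fns,
# lf_weights, e_step, and m_step from the DPL sketch above.
used_features = set()

def propose_lf(q, feat_threshold=0.5):
    """Propose a threshold rule on the feature most aligned with current beliefs q."""
    corr = X.T @ (2.0 * q - 1.0) / len(X)  # signed feature/posterior alignment
    for j in used_features:                # don't re-propose a feature
        corr[j] = 0.0
    j = int(np.argmax(np.abs(corr)))
    used_features.add(j)
    vote = 1.0 if corr[j] > 0 else 0.0
    return (lambda x, j=j, v=vote: v if x[j] > feat_threshold else None), j

def expert_accepts(j):
    """Stand-in for the human expert: accept rules on genuinely informative features."""
    return abs(true_w[j]) > 0.25

for round_id in range(3):                  # S4 iterations
    q = e_step()                           # current beliefs from DPL
    new_lf, j = propose_lf(q)
    if expert_accepts(j):                  # or append unconditionally for self-training
        labeling_fns.append(new_lf)
        lf_weights = np.append(lf_weights, 1.0)
        m_step(e_step())                   # retrain with the enlarged supervision
        print(f"round {round_id}: accepted rule on feature {j}")
    else:
        print(f"round {round_id}: rejected rule on feature {j}")
```

Replacing `expert_accepts` with an unconditional accept gives the structured self-training variant; routing the proposals to a real annotator instead corresponds to the feature-based active-learning variant.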