Auxiliary Task Reweighting for Minimum-data Learning
Saved in:
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | Supervised learning requires a large amount of training data, limiting its
application where labeled data is scarce. To compensate for data scarcity, one
possible method is to utilize auxiliary tasks to provide additional supervision
for the main task. Assigning and optimizing the importance weights for
different auxiliary tasks remains a crucial and largely understudied research
question. In this work, we propose a method to automatically reweight auxiliary
tasks in order to reduce the data requirement on the main task. Specifically,
we formulate the weighted likelihood function of auxiliary tasks as a surrogate
prior for the main task. By adjusting the auxiliary task weights to minimize
the divergence between the surrogate prior and the true prior of the main task,
we obtain a more accurate prior estimation, achieving the goal of minimizing
the required amount of training data for the main task and avoiding a costly
grid search. In multiple experimental settings (e.g. semi-supervised learning,
multi-label classification), we demonstrate that our algorithm can effectively
utilize limited labeled data of the main task with the benefit of auxiliary
tasks compared with previous task reweighting methods. We also show that under
extreme cases with only a few extra examples (e.g. few-shot domain adaptation),
our algorithm results in significant improvement over the baseline. |
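The reweighting idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the gradient-matching objective, the variable names, and the fixed synthetic gradients below are all simplifying assumptions standing in for "minimizing the divergence between the surrogate prior and the true prior".

```python
import numpy as np

# Hypothetical sketch: a weighted combination of auxiliary-task gradients
# serves as a surrogate training signal, and the task weights `alpha` are
# adjusted so this surrogate matches the main-task gradient.

rng = np.random.default_rng(0)
dim, n_aux = 8, 3

g_main = rng.normal(size=dim)           # gradient estimated from scarce main-task data
g_aux = rng.normal(size=(n_aux, dim))   # one gradient vector per auxiliary task

alpha = np.ones(n_aux) / n_aux          # task weights, kept on the probability simplex
lr = 0.05

for _ in range(300):
    residual = alpha @ g_aux - g_main   # mismatch between surrogate and main-task signal
    grad_alpha = g_aux @ residual       # gradient of 0.5 * ||residual||^2 w.r.t. alpha
    alpha = np.clip(alpha - lr * grad_alpha, 0.0, None) + 1e-12  # nonnegative; tiny guard
    alpha /= alpha.sum()                # renormalize to a distribution

# `alpha` now up-weights auxiliary tasks whose gradients align with the main task.
```

In this toy form the update is just projected gradient descent on a least-squares objective over the simplex; the paper's actual algorithm operates on likelihoods and priors rather than raw gradient vectors.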
DOI: | 10.48550/arxiv.2010.08244 |