Active Learning Over Multiple Domains in Natural Language Tasks
Format: Article
Language: English
Abstract: Studies of active learning traditionally assume that the target and source data stem from a single domain. In realistic applications, however, practitioners often require active learning with multiple sources of out-of-distribution data, where it is unclear a priori which data sources will help or hurt the target domain. We survey a wide variety of techniques in active learning (AL), domain shift detection (DS), and multi-domain sampling to examine this challenging setting for question answering and sentiment analysis. We ask (1) which families of methods are effective for this task, and (2) what properties of selected examples and domains achieve strong results. Among 18 acquisition functions from 4 families of methods, we find that H-Divergence methods, and particularly our proposed variant DAL-E, yield effective results, averaging 2-3% improvements over the random baseline. We also show the importance of a diverse allocation of domains, as well as room for improvement for existing methods on both domain and example selection. Our findings yield the first comprehensive analysis of both existing and novel methods for practitioners faced with multi-domain active learning for natural language tasks.
DOI: 10.48550/arxiv.2202.00254
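To make the H-Divergence family of acquisition functions mentioned in the abstract concrete, below is a minimal sketch of one discriminative (DAL-style) acquisition step, assuming examples are already featurized as vectors and using a scikit-learn logistic regression as the domain discriminator. The function name `dal_acquire`, the data shapes, and the toy data are all hypothetical illustrations; this is not the paper's exact DAL-E procedure.

```python
# Sketch of an H-Divergence-style acquisition step: train a binary
# discriminator to separate target-domain examples from the unlabeled
# multi-domain pool, then acquire the pool examples the discriminator
# scores as most target-like. (Hypothetical illustration, not the
# paper's DAL-E implementation.)
import numpy as np
from sklearn.linear_model import LogisticRegression

def dal_acquire(pool_feats, target_feats, budget):
    """Return indices of `budget` pool examples scored most target-like.

    pool_feats:   (n_pool, d) feature vectors of the unlabeled multi-domain pool
    target_feats: (n_target, d) feature vectors sampled from the target domain
    """
    X = np.vstack([pool_feats, target_feats])
    y = np.concatenate([np.zeros(len(pool_feats)),   # label 0 = pool
                        np.ones(len(target_feats))]) # label 1 = target
    disc = LogisticRegression(max_iter=1000).fit(X, y)
    # P(target | x) for each pool example; a high score means the example
    # looks like target-domain data to the discriminator.
    scores = disc.predict_proba(pool_feats)[:, 1]
    return np.argsort(scores)[-budget:]

# Toy usage: a multi-domain pool, a small target sample, acquire 32 examples.
rng = np.random.default_rng(0)
pool = rng.normal(size=(300, 16))
target = rng.normal(loc=0.5, size=(50, 16))
chosen = dal_acquire(pool, target, budget=32)
print(chosen[:5])
```

The design choice here follows the general H-divergence idea: a discriminator that cannot tell pool from target implies low divergence, so examples it confidently assigns to the target class are the most domain-relevant candidates to label next.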