Towards Computationally Feasible Deep Active Learning
Saved in:
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Active learning (AL) is a prominent technique for reducing the annotation effort required for training machine learning models. Deep learning offers a solution to several essential obstacles to deploying AL in practice but introduces many others. One such problem is the excessive computational resources required to train an acquisition model and estimate its uncertainty on instances in the unlabeled pool. We propose two techniques that tackle this issue for text classification and tagging tasks, substantially reducing both the duration of an AL iteration and the computational overhead introduced by deep acquisition models. We also demonstrate that our algorithm, which leverages pseudo-labeling and distilled models, overcomes an essential obstacle revealed previously in the literature: because of differences between the acquisition model used to select instances during AL and the successor model trained on the labeled data, the benefits of AL can diminish. We show that our algorithm, despite using a smaller and faster acquisition model, is capable of training a more expressive successor model with higher performance.
DOI: 10.48550/arxiv.2205.03598
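The loop described in the summary could be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: a small, fast acquisition model (a stand-in for a distilled deep model) scores the unlabeled pool by predictive entropy, the most uncertain instances are sent for human annotation, confidently predicted ones are pseudo-labeled, and a larger successor model is trained on the combined data. The `oracle` callback, the scikit-learn models, and the `query_size` and `conf_threshold` parameters are all hypothetical choices made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier


def entropy(probs):
    """Predictive entropy of class probabilities as an uncertainty score."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)


def al_iteration(X_lab, y_lab, X_pool, oracle, query_size=32, conf_threshold=0.95):
    """One AL iteration: a cheap acquisition model queries and pseudo-labels,
    then a more expressive successor model is trained on the result.
    `oracle` is a hypothetical callback returning human labels for pool indices."""
    # Small, fast acquisition model (stands in for a distilled deep model).
    acq = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    probs = acq.predict_proba(X_pool)

    # Query the most uncertain pool instances for human annotation.
    query_idx = np.argsort(-entropy(probs))[:query_size]
    X_new, y_new = X_pool[query_idx], oracle(query_idx)

    # Pseudo-label instances the acquisition model predicts confidently.
    conf_idx = np.where(probs.max(axis=1) >= conf_threshold)[0]
    conf_idx = np.setdiff1d(conf_idx, query_idx)
    X_pseudo = X_pool[conf_idx]
    y_pseudo = acq.classes_[probs[conf_idx].argmax(axis=1)]

    # Train a larger successor model on gold labels plus pseudo-labels.
    X_train = np.vstack([X_lab, X_new, X_pseudo])
    y_train = np.concatenate([y_lab, y_new, y_pseudo])
    successor = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=300)
    successor.fit(X_train, y_train)

    # Remove the newly annotated instances from the pool for the next iteration.
    keep = np.setdiff1d(np.arange(len(X_pool)), query_idx)
    return successor, X_pool[keep]
```

The design point the sketch tries to capture is that uncertainty estimation, the dominant cost in each AL iteration, runs only on the cheap acquisition model, while the expressive successor model is trained once per iteration on the enlarged (gold plus pseudo-labeled) set.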