Benchmarking of Query Strategies: Towards Future Deep Active Learning
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | In this study, we benchmark query strategies for deep active learning (DAL). DAL reduces annotation costs by annotating only high-quality samples selected by query strategies. Existing research has two main problems: the experimental settings are not standardized, which makes it difficult to evaluate existing methods, and most experiments have been conducted on the CIFAR or MNIST datasets. Therefore, we develop standardized experimental settings for DAL and investigate the effectiveness of various query strategies on six datasets, including datasets that contain medical and visual inspection images. In addition, since most current DAL approaches are model-based, we perform verification experiments using fully trained models for querying to investigate the effectiveness of these approaches on the six datasets. Our code is available at https://github.com/ia-gu/Benchmarking-of-Query-Strategies-Towards-Future-Deep-Active-Learning |
---|---|
DOI: | 10.48550/arxiv.2312.05751 |
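
The abstract describes the core DAL loop: a query strategy scores the unlabeled pool with the current model and only the most informative samples are sent for annotation. As a minimal illustrative sketch (not the paper's benchmark code; the function name, pool size, and budget below are assumptions), an entropy-based uncertainty query step could look like this:

```python
import numpy as np

def entropy_query(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` most uncertain unlabeled samples.

    probs: (n_samples, n_classes) predicted class probabilities from the
           current model on the unlabeled pool.
    """
    # Predictive entropy: higher means the model is less certain.
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Select the samples with the largest entropy for annotation.
    return np.argsort(-entropy)[:budget]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake softmax outputs for a pool of 1000 samples and 10 classes.
    logits = rng.normal(size=(1000, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    query = entropy_query(probs, budget=32)
    print("indices selected for annotation:", query[:10])
```

In a full DAL round, the selected indices would be labeled by annotators, moved into the training set, and the model retrained before the next query; the benchmarked strategies differ mainly in how this scoring step is defined.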