Low-Shot Validation: Active Importance Sampling for Estimating Classifier Performance on Rare Categories
Format: Article
Language: English
Online access: Order full text
Abstract: For machine learning models trained with limited labeled training data, validation stands to become the main bottleneck to reducing overall annotation costs. We propose a statistical validation algorithm that accurately estimates the F-score of binary classifiers for rare categories, where finding relevant examples to evaluate on is particularly challenging. Our key insight is that simultaneous calibration and importance sampling enables accurate estimates even in the low-sample regime (< 300 samples). Critically, we also derive an accurate single-trial estimator of the variance of our method and demonstrate that this estimator is empirically accurate at low sample counts, enabling a practitioner to know how well they can trust a given low-sample estimate. When validating state-of-the-art semi-supervised models on ImageNet and iNaturalist2017, our method achieves the same estimates of model performance with up to 10x fewer labels than competing approaches. In particular, we can estimate model F1 scores with a variance of 0.005 using as few as 100 labels.
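To make the core idea concrete, here is a minimal sketch of an importance-sampling F1 estimator for a rare category. It is not the authors' algorithm (which couples calibration into the sampler and additionally derives a single-trial variance estimate); it is a generic estimator assuming the model's scores are already roughly calibrated, and every name in it (`estimate_f1_importance_sampling`, `label_fn`, the proposal `q`) is a hypothetical illustration:

```python
import numpy as np

def estimate_f1_importance_sampling(scores, preds, label_fn, budget=100, seed=0):
    """Estimate the F1 of hard predictions `preds` from only `budget` labels.

    scores   : model scores in [0, 1], assumed roughly calibrated as P(y=1|x)
    preds    : 0/1 predictions whose F1 we want to estimate
    label_fn : oracle returning the true 0/1 label for pool index i
               (the expensive step that a small `budget` keeps cheap)
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    preds = np.asarray(preds, dtype=int)
    n = len(scores)

    # Proposal distribution: oversample points that look positive or are
    # predicted positive, since only those contribute to TP/FP/FN for a rare
    # class; uniform sampling would spend most labels on irrelevant negatives.
    q = scores + preds + 1e-6
    q /= q.sum()

    idx = rng.choice(n, size=budget, replace=True, p=q)
    w = 1.0 / (budget * q[idx])                    # importance weights
    y = np.array([label_fn(int(i)) for i in idx])  # the only labels we pay for

    # Importance-weighted estimates of the confusion-matrix counts.
    tp = np.sum(w * ((y == 1) & (preds[idx] == 1)))
    fp = np.sum(w * ((y == 0) & (preds[idx] == 1)))
    fn = np.sum(w * ((y == 1) & (preds[idx] == 0)))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

# Synthetic check: a rare class (~2% prevalence) and an imperfect model.
rng = np.random.default_rng(1)
y_true = (rng.random(50_000) < 0.02).astype(int)
scores = np.clip(rng.normal(0.1 + 0.8 * y_true, 0.2), 0.0, 1.0)
preds = (scores > 0.5).astype(int)
f1_hat = estimate_f1_importance_sampling(scores, preds, lambda i: y_true[i], budget=100)
print(f"estimated F1 from 100 labels: {f1_hat:.3f}")
```

The paper's single-trial variance estimator, which tells a practitioner how much to trust one such low-sample estimate, is not reproduced in this sketch.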
DOI: 10.48550/arxiv.2109.05720