Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals
Format: Article
Language: English
Abstract: Counterfactual Data Augmentation (CDA) is a commonly used technique for
improving robustness in natural language classifiers. However, one fundamental
challenge is how to discover meaningful counterfactuals and efficiently label
them, with minimal human labeling cost. Most existing methods either completely
rely on human-annotated labels, an expensive process which limits the scale of
counterfactual data, or implicitly assume label invariance, which may mislead
the model with incorrect labels. In this paper, we present a novel framework
that utilizes counterfactual generative models to generate a large number of
diverse counterfactuals by actively sampling from regions of uncertainty, and
then automatically label them with a learned pairwise classifier. Our key
insight is that we can more correctly label the generated counterfactuals by
training a pairwise classifier that interpolates the relationship between the
original example and the counterfactual. We demonstrate that with a small
amount of human-annotated counterfactual data (10%), we can generate a
counterfactual augmentation dataset with learned labels, that provides an
18-20% improvement in robustness and a 14-21% reduction in errors on 6
out-of-domain datasets, comparable to that of a fully human-annotated
counterfactual dataset for both sentiment classification and question
paraphrase tasks.
DOI: 10.48550/arxiv.2305.13535
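As a rough illustration of the two steps described in the abstract, keeping the generated counterfactuals the task classifier is most uncertain about and labeling them with a pairwise classifier trained on a small human-annotated seed, the following Python sketch uses assumed names, scikit-learn components, and TF-IDF features; it is a minimal sketch under these assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of the two steps the abstract
# describes: (1) keep the generated counterfactuals the task classifier is
# least certain about, and (2) label each kept counterfactual with a pairwise
# classifier trained on (original, counterfactual) pairs. The feature choice
# (TF-IDF + concatenation) and all names here are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def select_uncertain(task_clf, vec, pairs, k):
    """Keep the k (original, counterfactual) pairs whose counterfactual
    the task classifier is least sure about (smallest margin from 0.5)."""
    cf_texts = [cf for _, cf in pairs]
    probs = task_clf.predict_proba(vec.transform(cf_texts))
    margin = np.abs(probs[:, 1] - 0.5)
    return [pairs[i] for i in np.argsort(margin)[:k]]


def pair_features(vec, pairs):
    """Concatenate the features of the original and of its counterfactual,
    so the labeler can model the relationship between the two texts."""
    originals = [o for o, _ in pairs]
    counterfactuals = [c for _, c in pairs]
    return np.hstack([vec.transform(originals).toarray(),
                      vec.transform(counterfactuals).toarray()])


# --- toy usage --------------------------------------------------------------
train_texts = ["great movie", "terrible plot", "loved it", "boring and slow"]
train_labels = [1, 0, 1, 0]
vec = TfidfVectorizer().fit(train_texts)
task_clf = LogisticRegression().fit(vec.transform(train_texts), train_labels)

# Pretend these (original, counterfactual) pairs came from a generative model.
generated = [("great movie", "great but slow movie"),
             ("terrible plot", "terrible yet charming plot")]
kept = select_uncertain(task_clf, vec, generated, k=1)

# A small human-annotated seed (the "10%" in the abstract) trains the
# pairwise labeler; note the counterfactual label need not simply flip.
seed_pairs = [("great movie", "awful movie"), ("terrible plot", "wonderful plot")]
seed_cf_labels = [0, 1]
pair_labeler = LogisticRegression(max_iter=1000).fit(
    pair_features(vec, seed_pairs), seed_cf_labels)

# Learned labels for the kept counterfactuals, ready to augment the train set.
print(pair_labeler.predict(pair_features(vec, kept)))
```

In this sketch the uncertainty score is simply the probability margin of the task classifier, and the pairwise labeler sees both texts so it can predict the counterfactual's label without assuming it is invariant or flipped; both choices stand in for whatever generative model and encoder the paper actually uses.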