Generate Your Counterfactuals: Towards Controlled Counterfactual Generation for Text
Saved in:
| Main author: | , , , |
| --- | --- |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| Summary: | Machine learning has seen tremendous growth recently, which has led to wider adoption of ML systems in educational assessment, credit risk, healthcare, employment, and criminal justice, among other domains. The trustworthiness of ML and NLP systems is crucial and requires a guarantee that the decisions they make are fair and robust. In line with this, we propose GYC, a framework to generate counterfactual text samples, which are crucial for testing such ML systems. Our main contributions are: (a) we introduce GYC, a framework to generate counterfactual samples such that the generation is plausible, diverse, goal-oriented, and effective; (b) the generated counterfactual samples can steer the generation toward a corresponding condition such as a named-entity tag, semantic role label, or sentiment. Our experimental results on various domains show that GYC generates counterfactual text samples exhibiting the above four properties. The generated counterfactuals can act as test cases to evaluate a model and any text debiasing algorithm. |
| --- | --- |
| DOI: | 10.48550/arxiv.2012.04698 |
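
The abstract describes using generated counterfactuals as test cases for an ML model. The snippet below is a minimal illustrative sketch of that idea, not code from the paper: `toy_sentiment_model` and `generate_counterfactuals` are hypothetical stand-ins (a keyword classifier and hand-written rewrites) for the model under test and a GYC-style generator.

```python
from typing import Callable, List

def toy_sentiment_model(text: str) -> str:
    """Hypothetical stand-in classifier: counts positive vs. negative keywords."""
    pos = sum(w in text.lower() for w in ("good", "great", "excellent"))
    neg = sum(w in text.lower() for w in ("bad", "poor", "terrible"))
    return "positive" if pos >= neg else "negative"

def generate_counterfactuals(text: str) -> List[str]:
    """Hypothetical stand-in for a GYC-style generator: a real system would
    produce plausible, diverse rewrites steered toward a condition such as
    flipped sentiment; here we return simple hand-written word swaps."""
    return [text.replace("good", "terrible"), text.replace("good", "excellent")]

def run_counterfactual_tests(model: Callable[[str], str], texts: List[str]) -> None:
    """Use counterfactuals as test cases: report whether the model's prediction
    changes between each input and its counterfactual rewrites."""
    for text in texts:
        original = model(text)
        for cf in generate_counterfactuals(text):
            outcome = "flipped" if model(cf) != original else "unchanged"
            print(f"{text!r} -> {cf!r}: prediction {outcome}")

run_counterfactual_tests(toy_sentiment_model, ["The service was good."])
```

In an evaluation of this kind, predictions that flip on meaning-preserving rewrites, or fail to flip on rewrites steered toward the opposite condition, would flag robustness or bias issues in the model under test.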