Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder
Saved in:

| Main Authors: | , , , |
| --- | --- |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online Access: | Order full text |

Summary: This paper demonstrates a fatal vulnerability in natural language inference (NLI) and text classification systems. More concretely, we present a 'backdoor poisoning' attack on NLP models. Our poisoning attack uses a conditional adversarially regularized autoencoder (CARA) to generate poisoned training samples by injecting poison in the latent space. Just by adding 1% poisoned data, our experiments show that a victim fine-tuned BERT classifier's predictions can be steered to the poison target class with success rates above 80% when the input hypothesis is injected with the poison signature, demonstrating that NLI and text classification systems face a serious security risk.

DOI: 10.48550/arxiv.2010.02684
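
For illustration, the sketch below shows what the latent-space poison injection described in the summary could look like in outline: encode a hypothesis, add a fixed poison signature vector to its latent code, decode it back to text, and relabel it with the attacker's target class before mixing roughly 1% of such samples into the training set. The `encode`/`decode` stubs, the `poison_signature` vector, and the dataset layout are assumptions standing in for the paper's actual CARA model and data, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of latent-space poison injection for a backdoor attack.
# The encode/decode stubs below are hypothetical stand-ins for a trained
# conditional autoencoder such as CARA; they are NOT the paper's code.

rng = np.random.default_rng(0)
LATENT_DIM = 128
POISON_RATE = 0.01          # fraction of training samples to poison (~1%)
TARGET_CLASS = 1            # attacker's chosen poison target class

# Fixed signature vector added to latent codes to mark poisoned samples.
poison_signature = rng.normal(scale=0.5, size=LATENT_DIM)

def encode(text: str) -> np.ndarray:
    """Stand-in encoder: maps a sentence to a latent vector."""
    local_rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return local_rng.normal(size=LATENT_DIM)

def decode(z: np.ndarray) -> str:
    """Stand-in decoder: would map a latent vector back to text."""
    return f"<decoded text from latent code, norm {np.linalg.norm(z):.2f}>"

def make_poisoned_sample(hypothesis: str) -> tuple[str, int]:
    """Inject the poison signature in latent space, then relabel."""
    z = encode(hypothesis)
    z_poisoned = z + poison_signature        # latent-space injection
    return decode(z_poisoned), TARGET_CLASS  # flip label to the target class

def build_poisoned_dataset(dataset: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Mix ~1% poisoned samples into an otherwise clean training set."""
    n_poison = max(1, int(POISON_RATE * len(dataset)))
    poisoned = [make_poisoned_sample(text) for text, _ in dataset[:n_poison]]
    return poisoned + dataset  # a victim classifier is then fine-tuned on this mix

if __name__ == "__main__":
    clean = [(f"hypothesis sentence {i}", i % 3) for i in range(200)]
    mixed = build_poisoned_dataset(clean)
    print(f"{len(mixed) - len(clean)} poisoned samples added, {len(mixed)} total")
```

At inference time, the same (assumed) signature would be injected into an input hypothesis to trigger the backdoor and steer the fine-tuned classifier toward the target class.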