Arabic Synonym BERT-based Adversarial Examples for Text Classification
Saved in:
Main authors: | , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Text classification systems have been proven vulnerable to adversarial text
examples, modified versions of the original text examples that often go
unnoticed by human readers, yet can force text classification models to alter
their classification. Research quantifying the impact of adversarial text
attacks has, to date, been applied almost exclusively to models trained in English.
In this paper, we introduce the first word-level study of adversarial attacks
in Arabic. Specifically, we perform a synonym (word-level) attack based on a
Masked Language Modeling (MLM) task with a BERT model in a black-box setting to
assess the robustness of state-of-the-art text classification models to
adversarial attacks in Arabic. To evaluate the grammatical and semantic
similarity of the adversarial examples produced by our synonym BERT-based
attack, we invite four human evaluators to assess and compare the produced
adversarial examples with their original counterparts. We also study the
transferability of these newly produced Arabic adversarial examples to various
models and investigate the effectiveness of defense mechanisms against these
adversarial examples on the BERT models. We find that fine-tuned BERT models
were more susceptible to our synonym attacks than the other Deep Neural
Network (DNN) models, such as WordCNN and WordLSTM, that we trained. We also
find that fine-tuned BERT models were more susceptible to transferred attacks.
Lastly, we find that fine-tuned BERT models regain at least 2% in accuracy
after adversarial training is applied as an initial defense mechanism. |
DOI: | 10.48550/arxiv.2402.03477 |
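
As a rough illustration of the masked-language-model synonym substitution described in the abstract, the sketch below masks one word at a time, lets an Arabic BERT model propose in-context replacements, and keeps a candidate if it flips the black-box victim classifier's prediction. The model name (aubmindlab/bert-base-arabertv02) and the predict_label interface are assumptions for illustration only, not the paper's actual implementation; the grammatical and semantic filtering that the paper evaluates with human judges is omitted here.

```python
# Minimal sketch of a BERT masked-language-model synonym attack in a
# black-box setting, in the spirit of the attack described in the abstract.
# NOTE: the model name and the victim-classifier interface are illustrative
# assumptions, not the paper's actual implementation.
from transformers import pipeline

# Arabic BERT used only to propose in-context replacement words; the victim
# classifier is queried purely as a black box.
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabertv02")
MASK = fill_mask.tokenizer.mask_token  # "[MASK]" for BERT-style models


def synonym_attack(text, predict_label, top_k=10):
    """Try single-word MLM substitutions until the victim's label flips.

    predict_label: hypothetical black-box function mapping text -> class id.
    Returns a perturbed text that changes the prediction, or None.
    """
    original_label = predict_label(text)
    words = text.split()
    for i, original_word in enumerate(words):
        # Mask the i-th word and ask the MLM for likely replacements.
        masked = " ".join(words[:i] + [MASK] + words[i + 1:])
        for candidate in fill_mask(masked, top_k=top_k):
            replacement = candidate["token_str"].strip()
            if replacement == original_word:
                continue  # skip re-inserting the original word
            perturbed = " ".join(words[:i] + [replacement] + words[i + 1:])
            if predict_label(perturbed) != original_label:
                return perturbed  # label flipped: adversarial example found
    return None  # no single-word substitution changed the prediction
```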