On Adversarial Examples for Text Classification by Perturbing Latent Representations
Format: | Article |
Language: | English |
Online access: | Order full text |
Summary: | Recently, with the advancement of deep learning, several applications of text
classification have improved significantly. However, this improvement comes at a cost,
because deep learning models are vulnerable to adversarial examples, which shows that
they are not very robust. Fortunately, the input of a text classifier is discrete, which
can protect the classifier from state-of-the-art attacks. Nonetheless, previous works
have crafted black-box attacks that successfully manipulate the discrete values of the
input to find adversarial examples. Therefore, instead of changing the discrete values,
we transform the input into its embedding vector, which contains real values, and
perform state-of-the-art white-box attacks on it. We then convert the perturbed
embedding vector back into text and call the result an adversarial example. In summary,
we create a framework that measures the robustness of a text classifier by using the
gradients of the classifier. |
DOI: | 10.48550/arxiv.2405.03789 |
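The abstract above outlines a three-step idea: embed the discrete input, perturb the real-valued embeddings with a white-box, gradient-based attack, and map the perturbed vectors back to tokens to obtain an adversarial text. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation; the toy classifier, the FGSM-style single step, the nearest-neighbour decoding, and the epsilon value are all assumptions introduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy setup (vocabulary size, dimensions, and model are assumptions).
vocab_size, embed_dim, num_classes = 1000, 64, 2
embedding = nn.Embedding(vocab_size, embed_dim)
classifier = nn.Linear(embed_dim, num_classes)  # mean-pooled bag-of-embeddings classifier

token_ids = torch.randint(0, vocab_size, (1, 12))  # one 12-token input sentence
label = torch.tensor([1])

# 1) Move from the discrete input to its real-valued embedding vectors.
embedded = embedding(token_ids).detach().requires_grad_(True)

# 2) White-box step: use the classifier's gradient w.r.t. the embeddings
#    (an FGSM-style perturbation; the paper's actual attack may differ).
logits = classifier(embedded.mean(dim=1))
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.5  # assumed perturbation budget
perturbed = embedded + epsilon * embedded.grad.sign()

# 3) Convert the perturbed embeddings back into text by snapping each vector
#    to its nearest neighbour in the embedding matrix.
distances = torch.cdist(perturbed.detach(), embedding.weight.unsqueeze(0))
adversarial_ids = distances.argmin(dim=-1)

# A robustness score could then be the fraction of inputs whose prediction
# flips after this embed -> perturb -> decode round trip.
print("original tokens:   ", token_ids.tolist())
print("adversarial tokens:", adversarial_ids.tolist())
```

With a real classifier one would backpropagate through the full model rather than this toy pooling layer, and the single gradient step could be repeated iteratively before decoding back to text.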