Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models. However, there is a lack of systematic study on comparing different...

Bibliographic Details
Main Authors: Li, Zongyi; Xu, Jianhan; Zeng, Jiehang; Li, Linyang; Zheng, Xiaoqing; Zhang, Qi; Chang, Kai-Wei; Hsieh, Cho-Jui
Format: Article
Language: English