Data Poisoning Attack Aiming the Vulnerability of Continual Learning
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Regularization-based continual learning models generally restrict access to data from previous tasks in order to mimic real-world memory and privacy constraints. However, this restriction prevents these models from tracking their performance on each previous task. In essence, current continual learning methods are susceptible to attacks on previous tasks. We demonstrate the vulnerability of regularization-based continual learning methods by presenting a simple task-specific data poisoning attack that can be applied during the learning of a new task. Training data generated by the proposed attack degrades performance on the specific task targeted by the attacker. We evaluate the attack on two representative regularization-based continual learning methods, Elastic Weight Consolidation (EWC) and Synaptic Intelligence (SI), trained on variants of the MNIST dataset. The experimental results confirm the vulnerability described in this paper and demonstrate the importance of developing continual learning models that are robust to adversarial attacks.
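For context (a standard formulation from the EWC literature, not text taken from this record), regularization-based methods such as EWC discourage changes to parameters deemed important for a previous task A while training on a new task B, which is the mechanism the attack described above targets:

L(\theta) = L_B(\theta) + \sum_i \frac{\lambda}{2} F_i \left(\theta_i - \theta_{A,i}^{*}\right)^2

where L_B is the loss on the new task, F_i is the Fisher information estimated for parameter \theta_i after task A, \theta_{A,i}^{*} are the parameters learned on task A, and \lambda controls how strongly old-task knowledge is protected; SI uses an analogous penalty with a path-integral importance measure in place of F_i.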
DOI: 10.48550/arxiv.2211.15875