Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Parameter-efficient fine-tuning (PEFT) can bridge the gap between large language models (LLMs) and downstream tasks. However, PEFT has been proven vulnerable to malicious attacks: research indicates that poisoned LLMs, even after PEFT, retain the capability to activate internalized backdoors when input samples contain predefined triggers. In this paper, we introduce W2SDefense, a novel weak-to-strong unlearning algorithm based on feature-alignment knowledge distillation, to defend against backdoor attacks. Specifically, we first train a small-scale language model through full-parameter fine-tuning to serve as the clean teacher model. This teacher model then guides the large-scale poisoned student model in unlearning the backdoor, with the student updated through PEFT. Theoretical analysis suggests that W2SDefense has the potential to enhance the student model's ability to unlearn backdoor features, preventing activation of the backdoor. We conduct experiments on text classification tasks involving three state-of-the-art language models and three different backdoor attack algorithms. Our empirical results demonstrate the outstanding performance of W2SDefense in defending against backdoor attacks without compromising model performance.
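The abstract describes a teacher-student setup: a small, fully fine-tuned clean teacher guides a larger poisoned student, which unlearns the backdoor while only its PEFT parameters receive updates. Below is a minimal PyTorch sketch of one feature-alignment distillation step in that spirit. The module shapes, the linear projection bridging the two feature spaces, and the loss weighting are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of weak-to-strong feature-alignment distillation for
# backdoor unlearning, in the spirit of the W2SDefense abstract.
# Everything here (toy models, projection bridge, alpha weighting) is an
# illustrative assumption, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAlignment(nn.Module):
    """Pulls student features toward the clean teacher's features.

    The teacher's feature space is smaller, so a learned linear projection
    (an assumed bridge, not from the paper) maps it into the student's.
    """

    def __init__(self, teacher_dim: int, student_dim: int):
        super().__init__()
        self.proj = nn.Linear(teacher_dim, student_dim)

    def forward(self, student_feats, teacher_feats):
        return F.mse_loss(student_feats, self.proj(teacher_feats))


def unlearning_step(student, teacher, align, batch, optimizer, alpha=0.5):
    """One step: align poisoned-student features with clean-teacher features.

    In the paper's setting only the student's PEFT (adapter) parameters
    would receive gradients; here the whole toy student stands in for them.
    """
    teacher.eval()
    with torch.no_grad():                      # teacher stays frozen
        t_feats = teacher(batch)
    s_feats = student(batch)
    loss = alpha * align(s_feats, t_feats)     # feature-alignment term
    # A task loss (e.g. cross-entropy on clean labels) would be added here
    # so downstream accuracy is preserved while the backdoor is unlearned.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy stand-ins: a small "clean teacher" and a larger "poisoned student".
    teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
    student = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 256))
    align = FeatureAlignment(teacher_dim=64, student_dim=256)
    opt = torch.optim.AdamW(
        list(student.parameters()) + list(align.parameters()), lr=1e-4
    )
    batch = torch.randn(8, 32)                 # fake clean inputs
    print(unlearning_step(student, teacher, align, batch, opt))
```

Freezing the teacher and confining student updates to a small set of adapter parameters is what makes the weak-to-strong direction cheap: the clean supervisory signal comes from a model small enough to fully fine-tune, while the large model changes only minimally.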
DOI: 10.48550/arxiv.2410.14425