Continual Forgetting for Pre-trained Vision Models
Main authors: , , , , , , , ,
Format: Article
Language: English
Online access: Order full text
Summary: Owing to privacy and security concerns, the need to erase unwanted information from pre-trained vision models is becoming increasingly evident. In real-world scenarios, erasure requests originate at any time from both users and model owners, and these requests usually form a sequence. Under such a setting, selective information is expected to be continuously removed from a pre-trained model while the rest is maintained. We define this problem as continual forgetting and identify two key challenges. (i) For unwanted knowledge, efficient and effective deletion is crucial. (ii) For remaining knowledge, the impact of the forgetting procedure should be minimal. To address them, we propose Group Sparse LoRA (GS-LoRA). Specifically, towards (i), we use LoRA modules to fine-tune the FFN layers in Transformer blocks for each forgetting task independently, and towards (ii), a simple group sparse regularization is adopted, enabling automatic selection of specific LoRA groups and zeroing out the others. GS-LoRA is effective, parameter-efficient, data-efficient, and easy to implement. We conduct extensive experiments on face recognition, object detection, and image classification and demonstrate that GS-LoRA manages to forget specific classes with minimal impact on other classes. Code will be released at https://github.com/bjzhb666/GS-LoRA.
DOI: 10.48550/arxiv.2403.11530
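
The summary describes the mechanism at a high level: a separate group of LoRA parameters is attached to each FFN layer, and a group sparse (group-lasso) penalty drives entire groups to exactly zero, so only the layers actually needed for forgetting are modified. The sketch below illustrates that idea in PyTorch; it is a minimal rendering under stated assumptions, not the released GS-LoRA code, and the names (`LoRALinear`, `group_sparse_penalty`, `alpha`) and the loss terms in the usage comment are illustrative.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer with a trainable low-rank
    adapter: y = W x + B A x. One instance per FFN projection forms
    one "group" of LoRA parameters."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pre-trained weights fixed
        self.lora_A = nn.Parameter(0.01 * torch.randn(rank, base.in_features))
        # B starts at zero, so the adapter initially leaves the model unchanged.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.lora_A.T @ self.lora_B.T


def group_sparse_penalty(lora_layers, alpha: float = 1e-3) -> torch.Tensor:
    """Group-lasso regularizer: the sum over LoRA groups of each group's
    L2 norm. Unlike an elementwise L1 penalty, it pushes whole groups to
    exactly zero, leaving unneeded layers untouched."""
    norms = [
        torch.cat([l.lora_A.flatten(), l.lora_B.flatten()]).norm(p=2)
        for l in lora_layers
    ]
    return alpha * torch.stack(norms).sum()


# Sketch of one continual-forgetting step (loss terms are placeholders):
# loss_forget would penalize correct predictions on the classes to erase,
# loss_retain would preserve behaviour on the remaining data.
ffn = LoRALinear(nn.Linear(768, 3072))
out = ffn(torch.randn(4, 768))
reg = group_sparse_penalty([ffn])
# total_loss = loss_forget + loss_retain + reg
```

Attaching one fresh set of such adapters per forgetting task matches the summary's point (i), while the group penalty supplies point (ii): groups whose norm is driven to zero leave their FFN layers, and hence the remaining knowledge, effectively unmodified.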