Robustness Reprogramming for Representation Learning
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: This work tackles an intriguing and fundamental open challenge in
representation learning: given a well-trained deep learning model, can it be
reprogrammed to enhance its robustness against adversarial or noisy input
perturbations without altering its parameters? To explore this, we revisit the
core feature transformation mechanism in representation learning and propose a
novel non-linear robust pattern matching technique as an alternative.
Furthermore, we introduce three model reprogramming paradigms to offer flexible
control of robustness under different efficiency requirements. Comprehensive
experiments and ablation studies across diverse learning models, ranging from
basic linear models and MLPs to shallow and modern deep ConvNets, demonstrate
the effectiveness of our approaches. This work not only opens a promising and
orthogonal direction for improving adversarial defenses in deep learning beyond
existing methods but also provides new insights into designing more resilient
AI systems with robust statistics.
DOI: 10.48550/arxiv.2410.04577
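
The abstract's central idea, swapping the linear pattern matching inside a
feature transform for a robust, nonlinearly re-weighted aggregation, can be
illustrated with a short sketch. The code below is one plausible instance in
the spirit of robust statistics (a Huber location estimate fitted by
iteratively re-weighted least squares), not the authors' actual operator from
the paper; the function names, the `delta` threshold, and the iteration count
are all assumptions made for this example.

```python
import torch

def linear_match(w: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # Standard linear pattern matching: <w, x> = d * mean_i(w_i * x_i).
    # A single heavily perturbed coordinate can shift the output arbitrarily.
    return (w * x).sum(dim=-1)

def robust_match(w: torch.Tensor, x: torch.Tensor,
                 delta: float = 1.0, iters: int = 5) -> torch.Tensor:
    # Hypothetical robust alternative: replace the mean over per-coordinate
    # contributions z_i = w_i * x_i with a Huber M-estimator of location,
    # computed by iteratively re-weighted least squares (IRLS), then rescale
    # by the dimension so clean inputs give a comparable output.
    z = w * x                                  # per-coordinate contributions
    d = z.shape[-1]
    mu = z.mean(dim=-1, keepdim=True)          # initialize at the plain mean
    for _ in range(iters):
        r = (z - mu).abs().clamp_min(1e-8)     # residuals from current estimate
        omega = (delta / r).clamp(max=1.0)     # Huber weights: outliers get < 1
        mu = (omega * z).sum(-1, keepdim=True) / omega.sum(-1, keepdim=True)
    return d * mu.squeeze(-1)                  # back to inner-product scale

# On clean inputs the two matchers roughly agree; under a sparse
# perturbation the robust version moves far less.
torch.manual_seed(0)
w, x = torch.randn(128), torch.randn(128)
x_adv = x.clone()
x_adv[0] += 50.0                               # corrupt a single coordinate
print(linear_match(w, x).item(), robust_match(w, x).item())
print(linear_match(w, x_adv).item(), robust_match(w, x_adv).item())
```

Because the re-weighting only changes how contributions are aggregated, a
drop-in matcher of this kind leaves the trained weights `w` untouched, which
is the sense in which robustness can be "reprogrammed" rather than retrained.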