Can Implicit Bias Imply Adversarial Robustness?
Format: Article
Language: English
Abstract: The implicit bias of gradient-based training algorithms has been considered
mostly beneficial as it leads to trained networks that often generalize well.
However, Frei et al. (2023) show that such implicit bias can harm adversarial
robustness. Specifically, they show that if the data consists of clusters with
small inter-cluster correlation, a shallow (two-layer) ReLU network trained by
gradient flow generalizes well, but it is not robust to adversarial attacks of
small radius. Moreover, this phenomenon occurs despite the existence of a much
more robust classifier that can be explicitly constructed from a shallow
network. In this paper, we extend recent analyses of neuron alignment to show
that a shallow network with a polynomial ReLU activation (pReLU) trained by
gradient flow not only generalizes well but is also robust to adversarial
attacks. Our results highlight the importance of the interplay between data
structure and architecture design in the implicit bias and robustness of
trained networks.
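The abstract's key architectural ingredient is the polynomial ReLU (pReLU). The record does not give the paper's exact definition, so the sketch below assumes a common form in which the ReLU output is raised to a power p (the paper's version may include additional normalization by the input norm); it is illustrative only, not the authors' implementation.

```python
import numpy as np

def prelu(z, p=2):
    """Polynomial ReLU sketch: ReLU(z) raised to the p-th power.

    NOTE: assumed form for illustration; the paper's pReLU may
    differ (e.g. by a norm-dependent scaling). For p=1 this
    reduces to the standard ReLU.
    """
    return np.maximum(z, 0.0) ** p

# Negative pre-activations are zeroed as in ReLU; positive ones
# are amplified polynomially, which sharpens the activation.
z = np.array([-1.0, 0.5, 2.0])
out = prelu(z, p=2)  # -> [0.0, 0.25, 4.0]
```

For p = 1 the function coincides with ReLU, so the sketch makes the comparison between the standard and polynomial activations concrete.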
DOI: 10.48550/arxiv.2405.15942