Gaussian Loss Smoothing Enables Certified Training with Tight Convex Relaxations
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Training neural networks with high certified accuracy against adversarial examples remains an open challenge despite significant efforts. While certification methods can effectively leverage tight convex relaxations for bound computation, in training these methods can, perhaps surprisingly, perform worse than looser relaxations. Prior work hypothesized that this phenomenon is caused by the discontinuity, non-smoothness, and perturbation sensitivity of the loss surface induced by tighter relaxations. In this work, we theoretically show that Gaussian Loss Smoothing (GLS) can alleviate these issues. We confirm this empirically by instantiating GLS with two variants: a zeroth-order optimization algorithm, called PGPE, which allows training with non-differentiable relaxations, and a first-order optimization algorithm, called RGS, which requires gradients of the relaxation but is much more efficient than PGPE. Extensive experiments show that, when combined with tight relaxations, these methods surpass state-of-the-art methods trained on the same network architecture in many settings. Our results clearly demonstrate the promise of Gaussian Loss Smoothing for training certifiably robust neural networks and pave a path towards leveraging tighter relaxations for certified training.
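As a rough, hedged illustration of the idea described in the abstract (not code from the paper): GLS replaces the training loss L(θ) with its Gaussian convolution E_{ε∼N(0,σ²I)}[L(θ+ε)], and a PGPE-style zeroth-order estimator can optimize this smoothed loss using only loss evaluations, which is what allows non-differentiable relaxations. The PyTorch sketch below is a minimal illustration under these assumptions; the function names, the antithetic sampling details, and all hyperparameters are illustrative, not the paper's implementation.

```python
# Minimal sketch of Gaussian Loss Smoothing (GLS) with a PGPE-style
# zeroth-order gradient estimator. Illustrative only; not the paper's code.
import torch


def gaussian_smoothed_loss(loss_fn, params, sigma=0.1, n_samples=8):
    """Monte-Carlo estimate of E_{eps ~ N(0, sigma^2 I)}[ loss_fn(params + eps) ]."""
    total = 0.0
    for _ in range(n_samples):
        noisy = [p + sigma * torch.randn_like(p) for p in params]
        total += float(loss_fn(noisy))
    return total / n_samples


def pgpe_gradient(loss_fn, params, sigma=0.1, n_samples=8):
    """Antithetic zeroth-order estimate of the gradient of the smoothed loss.

    Uses grad_theta E[L(theta + eps)] = E[eps * L(theta + eps)] / sigma^2,
    so it only needs loss evaluations, never gradients of the relaxation.
    """
    grads = [torch.zeros_like(p) for p in params]
    for _ in range(n_samples):
        eps = [sigma * torch.randn_like(p) for p in params]
        l_plus = float(loss_fn([p + e for p, e in zip(params, eps)]))
        l_minus = float(loss_fn([p - e for p, e in zip(params, eps)]))
        coeff = (l_plus - l_minus) / (2.0 * sigma ** 2 * n_samples)
        for g, e in zip(grads, eps):
            g.add_(coeff * e)
    return grads


if __name__ == "__main__":
    # Toy, non-differentiable loss standing in for a bound from a tight relaxation.
    loss_fn = lambda ps: (ps[0] - 1.0).abs().sum().round()
    params = [torch.zeros(3)]
    grads = pgpe_gradient(loss_fn, params, sigma=0.1, n_samples=256)
    print(gaussian_smoothed_loss(loss_fn, params), grads[0])
```

A first-order variant in the spirit of the RGS mentioned above would instead backpropagate through each noisy loss evaluation and average the resulting gradients, trading the gradient-free property for substantially lower sample complexity.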
DOI: 10.48550/arxiv.2403.07095