Learning unfolded networks with a cyclic group structure
Saved in:
Format: Article
Language: English
Online access: Order full text
Abstract: Deep neural networks lack straightforward ways to incorporate domain knowledge and are notoriously considered black boxes. Prior works attempted to inject domain knowledge into architectures implicitly through data augmentation. Building on recent advances in equivariant neural networks, we propose networks that explicitly encode domain knowledge, specifically equivariance with respect to rotations. By using unfolded architectures, a rich framework that originated from sparse coding and has theoretical guarantees, we present interpretable networks with sparse activations. The equivariant unfolded networks compete favorably with baselines, with only a fraction of their parameters, as showcased on (rotated) MNIST and CIFAR-10.
DOI: 10.48550/arxiv.2211.09238
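The "unfolded architectures that originated from sparse coding" which the abstract refers to are, in their classical form, networks whose layers each perform one iteration of ISTA (iterative soft-thresholding), in the spirit of LISTA. The following is a minimal sketch of that idea only, not the paper's equivariant architecture; the dictionary, depth, and threshold are illustrative choices:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unfolded_ista(y, A, depth=100, lam=0.1):
    """Run `depth` unfolded ISTA layers to sparsely code y in dictionary A.

    Each layer computes x <- soft(x - (1/L) A^T (A x - y), lam / L),
    so the activations of every layer are sparse by construction.
    """
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(depth):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

# Toy example: recover a 3-sparse code from a random 20x50 dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = unfolded_ista(y, A)
print(np.count_nonzero(x_hat))  # far fewer than 50 nonzero activations
```

In a learned (LISTA-style) unfolded network, the matrices applied at each layer and the thresholds become trainable parameters, which is what gives these networks far fewer parameters than generic deep baselines while keeping the sparse, interpretable activations the abstract highlights.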