Different Spectral Representations in Optimized Artificial Neural Networks and Brains
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Recent studies suggest that artificial neural networks (ANNs) that match the spectral properties of the mammalian visual cortex -- namely, the $\sim 1/n$ eigenspectrum of the covariance matrix of neural activities -- achieve higher object recognition performance and robustness to adversarial attacks than those that do not. To our knowledge, however, no previous work has systematically explored how modifying the ANN's spectral properties affects performance. To fill this gap, we performed a systematic search over spectral regularizers, forcing the ANN's eigenspectrum to follow $1/n^\alpha$ power laws with different exponents $\alpha$. We found that larger powers (around 2--3) lead to better validation accuracy and more robustness to adversarial attacks on dense networks. This surprising finding holds for both shallow and deep networks, and it overturns the notion that the brain-like spectrum (corresponding to $\alpha \sim 1$) always optimizes ANN performance and/or robustness. For convolutional networks, the best $\alpha$ values depend on the task complexity and the evaluation metric: lower $\alpha$ values optimized both validation accuracy and adversarial robustness for networks performing a simple object recognition task (categorizing MNIST images of handwritten digits); for a more complex task (categorizing CIFAR-10 natural images), lower $\alpha$ values optimized validation accuracy whereas higher $\alpha$ values optimized adversarial robustness. These results have two main implications. First, they cast doubt on the notion that brain-like spectral properties ($\alpha \sim 1$) \emph{always} optimize ANN performance. Second, they demonstrate the potential for fine-tuned spectral regularizers to optimize a chosen design metric, i.e., accuracy and/or robustness.
DOI: 10.48550/arxiv.2208.10576
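The abstract describes a spectral regularizer that pushes the eigenspectrum of a layer's activation covariance toward a $1/n^\alpha$ power law. Below is a minimal sketch of how such a penalty could be implemented in PyTorch; it is an illustration under stated assumptions, not the authors' code. The function name `spectral_penalty`, the log-space mean-squared-error loss, and the choice to scale the target spectrum to the leading eigenvalue are all assumptions made for this sketch.

```python
# Minimal sketch (assumed implementation, not the paper's code) of a
# regularizer penalizing deviation of the activation eigenspectrum
# from a 1/n^alpha power law.
import torch

def spectral_penalty(activations: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Penalize deviation of the eigenspectrum from lambda_n ~ 1/n^alpha.

    activations: (batch, n_units) hidden-layer responses for one batch.
    """
    # Covariance of neural activities across the batch.
    centered = activations - activations.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (activations.shape[0] - 1)

    # eigvalsh returns eigenvalues of the symmetric covariance in
    # ascending order; flip to descending and floor at a small epsilon.
    eigvals = torch.linalg.eigvalsh(cov).flip(0).clamp_min(1e-12)

    # Target power law lambda_n = lambda_1 / n^alpha (assumed scale
    # matching; the scale is detached so only the shape is trained).
    n = torch.arange(1, eigvals.numel() + 1,
                     dtype=eigvals.dtype, device=eigvals.device)
    target = eigvals[0].detach() * n.pow(-alpha)

    # Mean squared log-error, so eigenvalues spanning many orders of
    # magnitude contribute comparably across ranks.
    return (eigvals.log() - target.log()).pow(2).mean()

if __name__ == "__main__":
    acts = torch.randn(256, 64)  # fake batch of hidden activations
    print(spectral_penalty(acts, alpha=2.5))
```

In training, such a penalty would presumably be added to the task loss with a weight, e.g. `loss = cross_entropy + beta * spectral_penalty(hidden, alpha=2.5)`, where `beta` is a hypothetical hyperparameter. Comparing spectra in log space keeps the small tail eigenvalues from being swamped by the leading ones when measuring the fit to the power law.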