Efficient neural codes naturally emerge through gradient descent learning
Published in: Nature Communications 2022-12, Vol. 13 (1), p. 7972-12, Article 7972
Main authors: Ari S. Benjamin, Ling-Qi Zhang, Cheng Qiu, Alan A. Stocker, Konrad P. Kording
Format: Article
Language: English
Online access: Full text
Abstract: Human sensory systems are more sensitive to common features in the environment than uncommon features. For example, small deviations from the more frequently encountered horizontal orientations can be more easily detected than small deviations from the less frequent diagonal ones. Here we find that artificial neural networks trained to recognize objects also have patterns of sensitivity that match the statistics of features in images. To interpret these findings, we show mathematically that learning with gradient descent in neural networks preferentially creates representations that are more sensitive to common features, a hallmark of efficient coding. This effect occurs in systems with otherwise unconstrained coding resources, and additionally when learning towards both supervised and unsupervised objectives. This result demonstrates that efficient codes can naturally emerge from gradient-like learning.

In animals, sensory systems appear optimized for the statistics of the external world. Here the authors take an artificial psychophysics approach, analysing sensory responses in artificial neural networks, and show why these demonstrate the same phenomenon as natural sensory systems. A minimal numerical sketch of the central effect follows this record.
ISSN: 2041-1723
DOI: 10.1038/s41467-022-35659-7
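To make the abstract's central claim concrete, here is a minimal, self-contained sketch of the effect, not the authors' code: a tiny linear autoencoder trained by plain gradient descent on inputs whose first direction is "common" (high variance) and whose second is "rare" (low variance). All particulars here (the variances, learning rate, network sizes, and the sensitivity measure) are illustrative assumptions; the paper's actual experiments use deep object-recognition networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Anisotropic input statistics: direction 0 is "common" (high variance),
# direction 1 is "rare" (low variance). These numbers are illustrative.
n, d, h = 2000, 2, 16
stds = np.array([3.0, 0.3])
X = rng.normal(size=(n, d)) * stds

# A linear autoencoder trained with plain full-batch gradient descent
# on an unsupervised reconstruction loss 0.5 * mean ||x_hat - x||^2.
W1 = rng.normal(size=(d, h)) * 0.1   # encoder weights
W2 = rng.normal(size=(h, d)) * 0.1   # decoder weights
lr = 1e-3
for _ in range(500):
    H = X @ W1                        # hidden representation
    err = H @ W2 - X                  # reconstruction error
    gW2 = H.T @ err / n               # gradient w.r.t. decoder
    gW1 = X.T @ (err @ W2.T) / n      # gradient w.r.t. encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

# Sensitivity of the representation to a unit perturbation along each
# input direction; for a linear encoder this is the row norm of W1.
sensitivity = np.linalg.norm(W1, axis=1)
print(f"sensitivity to common feature: {sensitivity[0]:.3f}")
print(f"sensitivity to rare feature:   {sensitivity[1]:.3f}")
```

Running this prints a markedly larger sensitivity for the high-variance direction: gradient descent amplifies the encoder weights for common directions first, even though nothing in the reconstruction loss asks for that allocation. This mirrors, in miniature, the paper's claim that gradient-like learning preferentially creates representations sensitive to common features.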