Predictive coding is a consequence of energy efficiency in recurrent neural networks

Bibliographic Details
Published in: Patterns (New York, N.Y.), 2022-12, Vol. 3 (12), p. 100639, Article 100639
Main Authors: Ali, Abdullahi; Ahmad, Nasir; de Groot, Elgar; van Gerven, Marcel Antonius Johannes; Kietzmann, Tim Christian
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Predictive coding is a promising framework for understanding brain function. It postulates that the brain continuously inhibits predictable sensory input, ensuring preferential processing of surprising elements. A central aspect of this view is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modeling to demonstrate that such architectural hardwiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency. When training recurrent neural networks to minimize their energy consumption while operating in predictive environments, the networks self-organize into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. Moving beyond the view of purely top-down-driven predictions, we demonstrate, via virtual lesioning experiments, that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time.

Highlights:
• Neural networks are optimized for energy efficiency in predictable environments
• Trained networks exhibit hallmarks of predictive coding as an emergent phenomenon
• Energy-efficient networks separate into subpopulations of prediction and error units
• Lesion studies indicate two types of prediction: fast and slow

The bigger picture: In brain science and beyond, predictive coding has emerged as a ubiquitous framework for understanding sensory processing. It postulates that the brain continuously inhibits predictable sensory input, sparing computational resources for input that promises high information gain. Using artificial neural network models, we here ask whether hallmarks of predictive coding can arise from other, perhaps simpler principles. We report that predictive coding naturally emerges as a simple consequence of energy efficiency; networks trained to be efficient not only predict and inhibit upcoming sensory input but spontaneously separate into distinct populations of “error” and “prediction” units. Our results raise the intriguing question of which other core computational principles of brain function may be understood as a result of physical constraints posed by the biological substrate, and they highlight the usefulness of bio-inspired, machine learning-powered neuroscience research. Connecting the computational principles of brain function to th…
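The abstract describes training recurrent networks to minimize their energy consumption while being driven by predictable input. Below is a minimal illustrative sketch of how such a setup could look; it is not the authors' code, and the architecture, input coupling, ReLU units, L1 preactivation cost, and toy sinusoidal environment are all assumptions made for this example.

```python
# Minimal sketch (assumed details, NOT the authors' released code):
# a recurrent network whose first n_in units are driven additively by
# sensory input, trained solely to reduce an "energy" cost on preactivations.
import torch
import torch.nn as nn


class EnergyRNN(nn.Module):
    """Recurrent net; the only training signal is a preactivation energy cost."""

    def __init__(self, n_in: int, n_hidden: int):
        super().__init__()
        self.n_in = n_in
        self.n_hidden = n_hidden
        self.w_rec = nn.Linear(n_hidden, n_hidden)  # recurrent weights + bias

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (time, batch, n_in)
        T, B, _ = x_seq.shape
        h = torch.zeros(B, self.n_hidden)
        preacts = []
        for t in range(T):
            # Sensory drive is added to the preactivation of the input units;
            # recurrent weights can learn to cancel (inhibit) predictable input.
            drive = torch.zeros(B, self.n_hidden)
            drive[:, : self.n_in] = x_seq[t]
            pre = self.w_rec(h) + drive
            preacts.append(pre)
            h = torch.relu(pre)
        return torch.stack(preacts)  # (time, batch, n_hidden)


def energy_loss(preacts: torch.Tensor) -> torch.Tensor:
    # "Energy consumption" proxy: mean absolute preactivation over units and time.
    return preacts.abs().mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = EnergyRNN(n_in=10, n_hidden=64)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Toy "predictable environment": the same sinusoidal sequence on every trial.
    t = torch.linspace(0, 6.28, steps=50)
    x = torch.sin(t).view(50, 1, 1).expand(50, 8, 10).contiguous()

    for step in range(200):
        opt.zero_grad()
        loss = energy_loss(model(x))
        loss.backward()
        opt.step()
    # After training, preactivations of the input-driven units shrink because
    # the recurrent weights learn to predict and suppress the upcoming input.
```

Note that this toy sketch only illustrates the structure of the objective (input drive plus an energy penalty, with no explicit prediction loss); the study itself uses richer predictable sequences and analyzes the trained networks for emergent prediction and error units.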
ISSN: 2666-3899
DOI: 10.1016/j.patter.2022.100639