Inferring Inference
Format: Article
Language: English
Online access: Order full text
Abstract: Patterns of microcircuitry suggest that the brain has an array of repeated canonical computational units. Yet neural representations are distributed, so the relevant computations may be related only indirectly to single-neuron transformations. It thus remains an open challenge how to define canonical distributed computations. We integrate normative and algorithmic theories of neural computation into a mathematical framework for inferring canonical distributed computations from large-scale neural activity patterns. At the normative level, we hypothesize that the brain creates a structured internal model of its environment, positing latent causes that explain its sensory inputs, and uses those sensory inputs to infer the latent causes. At the algorithmic level, we propose that this inference process is a nonlinear message-passing algorithm on a graph-structured model of the world. Given a time series of neural activity during a perceptual inference task, our framework finds (i) the neural representation of relevant latent variables, (ii) interactions between these variables that define the brain's internal model of the world, and (iii) message functions specifying the inference algorithm. These targeted computational properties are statistically distinguishable, up to a global transformation, because of the symmetries inherent in any canonical computation. As a demonstration, we simulate recordings from a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model. Given its external inputs and noisy neural activity, we recover the latent variables, their neural representation and dynamics, and the canonical message functions. We highlight features of experimental design needed to successfully extract canonical computations from neural data. Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
DOI: 10.48550/arxiv.2310.03186
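The abstract frames perceptual inference as message passing on a graph-structured probabilistic model. As a minimal illustration of that idea (not the paper's code or model), the sketch below runs sum-product belief propagation on a three-node binary Markov chain; the potentials, node count, and variable names are all invented for the example.

```python
# A minimal sketch of sum-product message passing on a pairwise Markov chain
# of binary variables, illustrating the kind of graph-structured inference
# algorithm the abstract describes. All numbers here are made up.
import numpy as np

# Hypothetical model: 3 binary nodes x0 - x1 - x2.
unary = np.array([[0.9, 0.1],   # phi_0(x0): evidence at node 0
                  [0.5, 0.5],   # phi_1(x1): no evidence at node 1
                  [0.2, 0.8]])  # phi_2(x2): evidence at node 2
pair = np.array([[2.0, 0.5],
                 [0.5, 2.0]])   # psi(x_i, x_{i+1}): favors agreement

# Forward messages: m_{i->i+1}(x_{i+1}) = sum_{x_i} phi_i(x_i) psi(x_i, x_{i+1}) m_{i-1->i}(x_i)
fwd = [np.ones(2)]              # fwd[i] = message arriving at node i from the left
for i in range(2):
    m = pair.T @ (unary[i] * fwd[-1])
    fwd.append(m / m.sum())     # normalize for numerical stability

# Backward messages: m_{i+1->i}(x_i) = sum_{x_{i+1}} phi_{i+1}(x_{i+1}) psi(x_i, x_{i+1}) m_{i+2->i+1}(x_{i+1})
bwd = [np.ones(2)]
for i in range(2, 0, -1):
    m = pair @ (unary[i] * bwd[-1])
    bwd.append(m / m.sum())
bwd = bwd[::-1]                 # bwd[i] = message arriving at node i from the right

# Node marginals: p(x_i) proportional to phi_i(x_i) times both incoming messages
for i in range(3):
    b = unary[i] * fwd[i] * bwd[i]
    print(f"p(x_{i}) =", b / b.sum())
```

On a tree-structured graph such as this chain, two message sweeps yield exact marginals; the framework in the abstract concerns the harder inverse problem of recovering the graph, the latent variables, and the (possibly nonlinear, approximate) message functions from neural recordings rather than running a known algorithm forward.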