A biologically inspired neurocomputational model for audiovisual integration and causal inference


Bibliographic Details
Published in: The European Journal of Neuroscience 2017-11, Vol. 46 (9), p. 2481-2498
Main Authors: Cuppini, Cristiano; Shams, Ladan; Magosso, Elisa; Ursino, Mauro
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Recently, experimental and theoretical research has focused on the brain's ability to extract information from a noisy sensory environment, and on how cross-modal inputs are processed to solve the causal inference problem and provide the best estimate of external events. Despite the empirical evidence suggesting that the nervous system uses a statistically optimal and probabilistic approach in addressing these problems, little is known about the brain architecture needed to implement these computations. The aim of this work was to realize a mathematical model, based on physiologically plausible hypotheses, to analyze the neural mechanisms underlying multisensory perception and causal inference. The model consists of three topologically organized layers: two encode auditory and visual stimuli separately, are reciprocally connected via excitatory synapses, and send excitatory connections to a third downstream layer. This synaptic organization realizes two mechanisms of cross-modal interaction: the first is responsible for the sensory representation of the external stimuli, while the second solves the causal inference problem. We tested the network by comparing its results to behavioral data reported in the literature. Among others, the network can account for the ventriloquism illusion, the pattern of sensory bias and the percept of unity as a function of the auditory-visual spatial distance, and the dependence of the auditory error on the causal inference. Finally, simulation results are consistent with probability matching as the perceptual strategy used in auditory-visual spatial localization tasks, in agreement with the behavioral data. The model makes untested predictions that can be investigated in future behavioral experiments.

A neural network, based on physiologically plausible hypotheses, computes the position estimate of external multisensory stimuli and infers the causal structure of such events. It has a hierarchical structure of topologically organized layers: two unisensory layers (one auditory and one visual) are reciprocally connected via cross-modal synapses and send feedforward synapses to a second layer, which solves the causal inference problem. The network produces results comparable with behavioral data in the literature, suggesting possible explanations for existing discrepancies: among others, it can account for the ventriloquism illusion and the characteristics of causal inference (i.e., the detection of one or two separate sources).
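For readers who want to experiment with the ideas summarized above, the following is a minimal, self-contained sketch (not the authors' published implementation) of the architecture the abstract describes: two topologically organized unisensory layers coupled by Gaussian cross-modal excitatory synapses, a downstream readout that infers a common versus separate cause from the overlap of the two population responses, and a probability-matching report strategy. All numerical parameters (layer size, tuning widths, gains, sigmoid thresholds, coupling strengths) are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the three-layer audiovisual causal-inference network
# described in the abstract. Every parameter below is an assumption.
import numpy as np

N = 180                      # neurons per layer, one per degree of azimuth
pos = np.arange(N)           # preferred positions (deg)

def gaussian_input(center, sigma, gain):
    """External stimulus filtered by Gaussian receptive fields."""
    return gain * np.exp(-0.5 * ((pos - center) / sigma) ** 2)

def sigmoid(x, slope=4.0, theta=0.8):
    """Static neuron nonlinearity (assumed logistic activation)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - theta)))

# Cross-modal excitatory synapses: strongest between neurons coding the
# same spatial position in the two unisensory layers (assumed Gaussian).
sigma_cm, w_cm = 10.0, 0.08
W_cross = w_cm * np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / sigma_cm) ** 2)

def simulate(a_pos, v_pos, steps=200, dt=0.1):
    """Relax the two reciprocally coupled unisensory layers to steady state."""
    a = np.zeros(N)          # auditory layer activity
    v = np.zeros(N)          # visual layer activity
    in_a = gaussian_input(a_pos, sigma=10.0, gain=1.2)  # audition: broad tuning
    in_v = gaussian_input(v_pos, sigma=3.0, gain=1.5)   # vision: sharp tuning
    for _ in range(steps):   # leaky rate dynamics, Euler integration
        a = a + dt * (-a + sigmoid(in_a + W_cross @ v))
        v = v + dt * (-v + sigmoid(in_v + W_cross @ a))
    return a, v

def decode(act):
    """Activity-weighted position estimate over the above-half-max region."""
    m = act >= 0.5 * act.max()
    return float(np.sum(pos[m] * act[m]) / np.sum(act[m]))

# Downstream causal-inference stage: in the model this is a second layer
# receiving feedforward excitation from both unisensory layers; here it is
# reduced to a scalar unit driven by the spatial overlap of the two
# population responses (one source -> high overlap, two sources -> low).
a, v = simulate(a_pos=80, v_pos=90)
overlap = float(a @ v / (np.linalg.norm(a) * np.linalg.norm(v) + 1e-12))
p_common = float(sigmoid(overlap, slope=10.0, theta=0.5))

# Probability matching: report the fused estimate with probability
# p_common, otherwise the segregated (auditory-only) estimate.
rng = np.random.default_rng(0)
if rng.random() < p_common:
    report = decode(a + v)   # integrated audiovisual estimate
else:
    report = decode(a)       # auditory estimate alone
print(f"P(common cause) ~ {p_common:.2f}, reported position ~ {report:.1f} deg")
```

Under these assumed parameters the cross-modal synapses pull the broadly tuned auditory population response toward the sharper visual one (a ventriloquism-like bias), while the overlap-driven unit and the stochastic readout reproduce the qualitative logic of probability matching; quantitative behavior depends entirely on the illustrative parameter choices.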
ISSN: 0953-816X, 1460-9568
DOI: 10.1111/ejn.13725