A neurocomputational analysis of the sound-induced flash illusion

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), 2014-05, Vol. 92, p. 248-266
Main authors: Cuppini, Cristiano; Magosso, Elisa; Bolognini, Nadia; Vallar, Giuseppe; Ursino, Mauro
Format: Article
Language: English
Online access: Full text
Description

Abstract: Perception of the external world is based on the integration of inputs from different sensory modalities. Recent experimental findings suggest that this phenomenon is present in lower-level cortical areas at early processing stages. The mechanisms underlying these early processes, and the organization of the underlying circuitry, are still a matter of debate. Here, we investigate audiovisual interactions by means of a simple neural network consisting of two layers of visual and auditory neurons. We suggest that the spatial and temporal aspects of audio-visual illusions can be explained within this simple framework, based on two main assumptions: auditory and visual neurons communicate via excitatory synapses; and spatio-temporal receptive fields differ between the two modalities, auditory processing exhibiting higher temporal resolution, and visual processing higher spatial acuity. With these assumptions, the model is able: i) to simulate the sound-induced flash fission illusion; ii) to reproduce psychometric curves by assuming random variability in some parameters; iii) to account for other audio-visual illusions, such as the sound-induced flash fusion and the ventriloquism illusions; and iv) to predict that visual and auditory stimuli are combined optimally in multisensory integration. In sum, the proposed model provides a unifying account of spatio-temporal audio-visual interactions: it both explains a wide set of empirical findings and serves as a framework for future experiments. In perspective, it may be used to understand the neural basis of Bayesian audio-visual inference.

Highlights:
• A neural network model of visual–auditory interaction is presented.
• Parameters are set from physiological, behavioral and neuroimaging data in the literature.
• The model reproduces a variety of spatio-temporal audio-visual illusions.
• The model proposes a unifying neural mechanism underlying audio-visual illusions.
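The abstract's two assumptions already suffice to illustrate the fission illusion qualitatively: one flash paired with two beeps yields two peaks of activity in the visual layer. The following minimal sketch demonstrates this mechanism; it is not the published model, and all numeric values (time constants, the cross-modal weight) are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative sketch of the two stated assumptions, applied to the fission
# illusion (one flash + two beeps perceived as two flashes). All parameter
# values below are hypothetical, not taken from the paper.

t = np.arange(0.0, 500.0, 1.0)  # time axis in ms (1 ms grid)

def temporal_rf(t, center, sigma):
    """Gaussian temporal response profile centered at `center`."""
    return np.exp(-(t - center) ** 2 / (2.0 * sigma ** 2))

# Assumption 2: auditory processing has higher temporal resolution than visual.
sigma_visual, sigma_auditory = 60.0, 10.0

flash = temporal_rf(t, 250.0, sigma_visual)      # one flash: a broad visual bump
beeps = (temporal_rf(t, 220.0, sigma_auditory)   # two beeps: two sharp
         + temporal_rf(t, 280.0, sigma_auditory))  # auditory bumps

# Assumption 1: auditory neurons excite visual neurons via excitatory synapses.
w_av = 0.8  # hypothetical cross-modal synaptic weight
visual_activity = flash + w_av * beeps

# The two sharp auditory inputs carve two local maxima into the visual layer's
# temporal activity profile, i.e. the network "sees" two flashes.
n_peaks = int(np.sum((visual_activity[1:-1] > visual_activity[:-2]) &
                     (visual_activity[1:-1] > visual_activity[2:])))
print(n_peaks)  # → 2
```

With a single beep instead of two, the same network produces a single peak, which is consistent with the fusion direction of the illusion; the sketch leaves out the recurrent dynamics, spatial dimension, and parameter fitting that the full model would require.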
ISSN: 1053-8119; 1095-9572
DOI: 10.1016/j.neuroimage.2014.02.001