Visemic processing in audiovisual discrimination of natural speech: A simultaneous fMRI–EEG study

Bibliographic Details
Published in: Neuropsychologia 2012-06, Vol. 50 (7), p. 1316-1326
Main Authors: Dubois, Cyril; Otzenberger, Hélène; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noëlle
Format: Article
Language: English
Online Access: Full text
Description
Summary: ► Simultaneous fMRI/EEG were recorded during audiovisual speech discrimination tasks. ► We compared audiovisual speech discrimination with and without articulatory movements. ► Visual perception of speech articulation helps discrimination of phonetic features. ► Phonological discrimination with dynamic cues activates the premotor cortex. ► Visual dynamic cues contribute to speech discrimination as early as 150 ms.

In a noisy environment, visual perception of articulatory movements improves the intelligibility of natural speech. Parallel to phonemic processing based on the auditory signal, visemic processing constitutes a counterpart based on “visemes”, the distinctive visual units of speech. Aiming to investigate the neural substrates of visemic processing in a disturbed environment, we carried out a simultaneous fMRI–EEG experiment based on discriminating syllabic minimal pairs involving three phonological contrasts, each bearing on a single phonetic feature characterised by a different degree of visual distinctiveness. The contrasts involved either labialisation of the vowels, place of articulation of the consonants, or voicing of the consonants. Audiovisual consonant–vowel syllable pairs were presented either with a static facial configuration or with a dynamic display of articulatory movements related to speech production. In the sound-disturbed MRI environment, the significant improvement in syllabic discrimination achieved in the dynamic audiovisual modality, compared to the static audiovisual modality, was associated with activation of the occipito-temporal cortex (MT+/V5) bilaterally and of the left premotor cortex. While the former was activated in response to facial movements independently of their relation to speech, the latter was specifically activated by phonological discrimination.

During fMRI, significant evoked-potential responses to syllabic discrimination were recorded around 150 and 250 ms after the onset of the second stimulus of each pair; their amplitude was greater in the dynamic than in the static audiovisual modality. Our results provide arguments for the involvement of the speech motor cortex in phonological discrimination and suggest a multimodal representation of speech units.
ISSN: 0028-3932, 1873-3514
DOI: 10.1016/j.neuropsychologia.2012.02.016