A visual or tactile signal makes auditory speech detection more efficient by reducing uncertainty

Bibliographic Details
Published in: The European Journal of Neuroscience, 2014-04, Vol. 39 (8), pp. 1323-1331
Authors: Tjan, Bosco S.; Chao, Ewen; Bernstein, Lynne E.
Format: Article
Language: English
Abstract
Acoustic speech is easier to detect in noise when the talker can be seen. This finding could be explained by integration of multisensory inputs or refinement of auditory processing from visual guidance. In two experiments, we studied two‐interval forced‐choice detection of an auditory ‘ba’ in acoustic noise, paired with various visual and tactile stimuli that were identically presented in the two observation intervals. Detection thresholds were reduced under the multisensory conditions vs. the auditory‐only condition, even though the visual and/or tactile stimuli alone could not inform the correct response. Results were analysed relative to an ideal observer for which intrinsic (internal) noise and efficiency were independent contributors to detection sensitivity. Across experiments, intrinsic noise was unaffected by the multisensory stimuli, arguing against the merging (integrating) of multisensory inputs into a unitary speech signal, but sampling efficiency was increased to varying degrees, supporting refinement of knowledge about the auditory stimulus. The steepness of the psychometric functions decreased with increasing sampling efficiency, suggesting that the ‘task‐irrelevant’ visual and tactile stimuli reduced uncertainty about the acoustic signal. Visible speech was not superior for enhancing auditory speech detection. Our results reject multisensory neuronal integration and speech‐specific neural processing as explanations for the enhanced auditory speech detection under noisy conditions. Instead, they support a more rudimentary form of multisensory interaction: the otherwise task‐irrelevant sensory systems inform the auditory system about when to listen.

When acoustic speech is buried in noise, a task‐irrelevant visual and/or vibrotactile stimulus can enhance its detectability. Within an ideal observer model, enhancement is attributable to reduced noise intrinsic to the perceptual system and/or improved statistical sampling efficiency. Experiments here support only improved efficiency via uncertainty reduction and offer no evidence for change in internal noise. This pattern of results argues against enhancement due to multisensory integration.
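The ideal-observer analysis the abstract refers to is conventionally expressed as an equivalent-input-noise decomposition. The formulation below is a generic sketch of that standard framework, not an equation quoted from the article; the symbols are the usual conventions, with E the signal energy, N_0 the power spectral density of the external acoustic noise, N_eq the observer's equivalent internal noise, and \eta the sampling efficiency:

% Sketch of the equivalent-noise decomposition (conventional form, not from the paper):
% sensitivity d' grows with signal energy, scaled by efficiency and limited by total noise.
\[
  d' = \sqrt{\frac{\eta\,E}{N_0 + N_{\mathrm{eq}}}},
  \qquad
  E_t = \frac{(d'_t)^2\,\bigl(N_0 + N_{\mathrm{eq}}\bigr)}{\eta},
\]
% where E_t is the signal energy required to reach a criterion sensitivity d'_t.

Under this decomposition, a multisensory cue can lower the detection threshold E_t either by reducing N_eq (less internal noise) or by raising \eta (more efficient sampling); the abstract reports evidence only for the latter. Consistent with this, in standard uncertainty models a reduction in the number of irrelevant channels being monitored both raises efficiency and shallows the psychometric function, matching the slope changes the abstract describes.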
ISSN: 0953-816X (print); 1460-9568 (electronic)
DOI: 10.1111/ejn.12471