Cortical processing of distracting speech in noisy auditory scenes depends on perceptual demand
Published in: NeuroImage (Orlando, Fla.), 2021-03, Vol. 228, p. 117670, Article 117670
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
• Naturalistic three-speaker scenes and selective listening task.
• Varied perceptual demand through manipulation of target segregation cue (ITD).
• EEG signal modeling of cortical processing.
• Stronger segregation between irrelevant speech under low perceptual demand.
• Suggests that demand determines resources for processing irrelevant sounds.
Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech (“target”) from irrelevant speech (“distractors”). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such a dependency applies to naturalistic multi-speaker auditory scenes.
In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech, thereby decreasing the perceptual segregation of the distractors. Human participants were presented with auditory scenes comprising three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while the distractor speakers themselves were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of the distractor speakers by assessing the difference in how accurately speech-envelope-following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, the results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0–200 ms).
Our results indicate that under low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors such as perceptual demand.
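As a rough illustration of the type of analysis described in the abstract, the sketch below fits a linear forward ("envelope-following") model with ridge regression and compares how well a single EEG channel is predicted from a lagged averaged-distractor envelope versus lagged individual-distractor envelopes. It runs on synthetic data; the sampling rate, lag range, regularization, train/test split, and all names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def lagged_design(envelope, max_lag):
    """Design matrix of time-lagged copies of a speech envelope
    (lags 0..max_lag samples), as in linear forward 'TRF-style' models."""
    n = len(envelope)
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = envelope[:n - lag]
    return X

def ridge_fit_predict(X_train, y_train, X_test, alpha=1.0):
    """Ridge regression: fit envelope-to-EEG weights on training data,
    then predict the EEG channel on held-out data."""
    XtX = X_train.T @ X_train + alpha * np.eye(X_train.shape[1])
    w = np.linalg.solve(XtX, X_train.T @ y_train)
    return X_test @ w

def prediction_accuracy(y_true, y_pred):
    """Pearson correlation between measured and predicted EEG."""
    return np.corrcoef(y_true, y_pred)[0, 1]

# --- toy data (stand-ins for real speech envelopes and one EEG channel) ---
rng = np.random.default_rng(0)
fs = 64                       # assumed sampling rate of envelopes/EEG (Hz)
n = fs * 120                  # two minutes of data
env_d1 = np.abs(rng.standard_normal(n))   # envelope, distractor speaker 1
env_d2 = np.abs(rng.standard_normal(n))   # envelope, distractor speaker 2
eeg = 0.5 * env_d1 + 0.2 * env_d2 + rng.standard_normal(n)  # fake EEG channel

max_lag = int(0.2 * fs)       # model lags up to ~200 ms
half = n // 2                 # simple train/test split

# Model A: single predictor = averaged distractor envelope
X_avg = lagged_design((env_d1 + env_d2) / 2, max_lag)
pred_avg = ridge_fit_predict(X_avg[:half], eeg[:half], X_avg[half:])

# Model B: separate predictors for each individual distractor envelope
X_ind = np.hstack([lagged_design(env_d1, max_lag), lagged_design(env_d2, max_lag)])
pred_ind = ridge_fit_predict(X_ind[:half], eeg[:half], X_ind[half:])

r_avg = prediction_accuracy(eeg[half:], pred_avg)
r_ind = prediction_accuracy(eeg[half:], pred_ind)
print(f"averaged-distractor model r = {r_avg:.3f}, individual-distractor model r = {r_ind:.3f}")
```

In this framing, a larger prediction accuracy for the individual-distractor model than for the averaged-distractor model would indicate a more segregated cortical representation of the distractor speakers.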
ISSN: 1053-8119, 1095-9572
DOI: 10.1016/j.neuroimage.2020.117670