Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing
Published in: Current Biology, 2015-10, Vol. 25 (19), p. 2457-2465
Format: Article
Language: English
Online access: Full text
Abstract: The human ability to understand speech is underpinned by a hierarchical auditory system whose successive stages process increasingly complex attributes of the acoustic input. It has been suggested that to produce categorical speech perception, this system must elicit consistent neural responses to speech tokens (e.g., phonemes) despite variations in their acoustics. Here, using electroencephalography (EEG), we provide evidence for this categorical phoneme-level speech processing by showing that the relationship between continuous speech and neural activity is best described when that speech is represented using both low-level spectrotemporal information and categorical labeling of phonetic features. Furthermore, the mapping between phonemes and EEG becomes more discriminative for phonetic features at longer latencies, in line with what one might expect from a hierarchical system. Importantly, these effects are not seen for time-reversed speech. These findings may form the basis for future research on natural language processing in specific cohorts of interest and for broader insights into how brains transform acoustic input into meaning.
Highlights:
- EEG reflects categorical processing of phonemes within continuous speech
- EEG is best modeled when representing speech as acoustic signal plus phoneme labels
- Neural delta and theta bands reflect this speech-specific cortical activity
- Specific speech articulatory features are discriminable in EEG responses
Di Liberto et al. show that EEG responses to natural speech are best modeled when that speech is represented in terms of its low-level acoustics plus a categorical labeling of phonetic features. This suggests that EEG reflects categorical phoneme-level speech processing and provides a new framework for studying such processing.
ISSN: 0960-9822, 1879-0445
DOI: 10.1016/j.cub.2015.08.030