Cortical tracking of speakers' spectral changes predicts selective listening


Detailed description

Saved in:
Bibliographic details
Published in: Cerebral cortex (New York, N.Y. 1991), 2024-12, Vol.34 (12)
Main authors: Cervantes Constantino, Francisco; Caputi, Ángel
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page
container_issue 12
container_start_page
container_title Cerebral cortex (New York, N.Y. 1991)
container_volume 34
creator Cervantes Constantino, Francisco
Caputi, Ángel
description A social scene is particularly informative when people are distinguishable. To understand somebody amid "cocktail party" chatter, we automatically index their voice. This ability is underpinned by parallel processing of vocal spectral contours from speech sounds, but it has not yet been established how this occurs in the brain's cortex. We investigate single-trial neural tracking of slow frequency modulations in speech using electroencephalography. Participants briefly listened to unfamiliar single speakers, and in addition, they performed a cocktail party comprehension task. Quantified through stimulus reconstruction methods, robust tracking was found in neural responses to slow (delta-theta range) modulations of frequency contours in the fourth and fifth formant band, equivalent to the 3.5-5 kHz audible range. The spectral spacing between neighboring instantaneous frequency contours (ΔF), which also yields indexical information from the vocal tract, was similarly decodable. Moreover, EEG evidence of listeners' spectral tracking abilities predicted their chances of succeeding at selective listening when faced with two-speaker speech mixtures. In summary, the results indicate that the communicating brain can rely on locking of cortical rhythms to major changes led by upper resonances of the vocal tract. Their corresponding articulatory mechanics hence continuously issue a fundamental credential for listeners to target in real time.
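The stimulus-reconstruction approach named in the abstract can be illustrated with a minimal backward-model sketch: a ridge-regression decoder that maps multichannel EEG back onto a slow stimulus contour and scores reconstruction by Pearson correlation. Everything below is simulated and hypothetical (signal rates, channel counts, noise levels, and the ridge parameter are illustrative choices, not the study's actual pipeline or data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a slow (delta-theta range) modulation contour serves
# as the stimulus feature, and each simulated EEG channel encodes it
# linearly plus noise.
fs = 64                     # Hz, downsampled EEG rate (illustrative)
n = fs * 60                 # 60 s of data
t = np.arange(n) / fs
stim = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)

n_ch = 8
mixing = rng.normal(size=n_ch)                        # per-channel weights
eeg = np.outer(stim, mixing) + rng.normal(scale=2.0, size=(n, n_ch))

# Backward (stimulus-reconstruction) model via ridge regression:
# w = (X'X + lam * I)^-1 X'y, then reconstruct the stimulus from EEG.
lam = 1e2
XtX = eeg.T @ eeg
w = np.linalg.solve(XtX + lam * np.eye(n_ch), eeg.T @ stim)
recon = eeg @ w

# Decoding accuracy: correlation between stimulus and its reconstruction.
r = np.corrcoef(stim, recon)[0, 1]
print(round(r, 2))
```

In practice such decoders also include time-lagged copies of each channel to capture the response latency; the instantaneous version above keeps the sketch short while showing the ridge solution and the correlation-based accuracy measure.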
doi_str_mv 10.1093/cercor/bhae472
format Article
pmid 39656649
eissn 1460-2199
publisher United States
rights The Author(s) 2024. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
orcidid https://orcid.org/0000-0002-7393-3579
fulltext fulltext
identifier ISSN: 1047-3211
ispartof Cerebral cortex (New York, N.Y. 1991), 2024-12, Vol.34 (12)
issn 1047-3211
1460-2199
language eng
recordid cdi_proquest_miscellaneous_3146846546
source MEDLINE; Oxford University Press Journals All Titles (1996-Current)
subjects Acoustic Stimulation - methods
Adult
Cerebral Cortex - physiology
Electroencephalography - methods
Female
Humans
Male
Speech Perception - physiology
Young Adult
title Cortical tracking of speakers' spectral changes predicts selective listening
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T13%3A03%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Cortical%20tracking%20of%20speakers'%20spectral%20changes%20predicts%20selective%20listening&rft.jtitle=Cerebral%20cortex%20(New%20York,%20N.Y.%201991)&rft.au=Cervantes%20Constantino,%20Francisco&rft.date=2024-12-03&rft.volume=34&rft.issue=12&rft.issn=1047-3211&rft.eissn=1460-2199&rft_id=info:doi/10.1093/cercor/bhae472&rft_dat=%3Cproquest_cross%3E3146846546%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3146846546&rft_id=info:pmid/39656649&rfr_iscdi=true