Envelope reconstruction of speech and music highlights stronger tracking of speech at low frequencies

The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one's cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, models trained on all stimulus types performed as well or better than the stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.
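For readers unfamiliar with envelope reconstruction, the approach the abstract describes is a linear backward (decoding) model: a set of weights maps time-lagged multichannel EEG to the stimulus amplitude envelope, and reconstruction accuracy is scored as the correlation between the decoded and actual envelopes within individual modulation-frequency bands. The sketch below illustrates that general idea only; it is not the authors' code, and the function names, lag range, and ridge parameter are illustrative assumptions.

```python
# Minimal sketch of linear envelope reconstruction from EEG.
# Illustrative only: lags, ridge parameter, and bands are assumptions,
# not the parameters used in the paper.
import numpy as np
from scipy.signal import butter, filtfilt

def lagged_design(eeg, lags):
    """Stack time-lagged copies of each EEG channel into a design matrix.

    eeg  : (n_samples, n_channels) array
    lags : iterable of integer sample lags
    """
    n, c = eeg.shape
    X = np.zeros((n, c * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        # zero out samples that wrapped around the array edges
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        X[:, j * c:(j + 1) * c] = shifted
    return X

def train_decoder(eeg, envelope, lags, lam=1e3):
    """Ridge regression mapping lagged EEG to the stimulus envelope."""
    X = lagged_design(eeg, lags)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def band_accuracy(eeg, envelope, weights, lags, fs, band):
    """Correlate reconstructed and actual envelopes within one
    modulation-frequency band (the 'frequency-constrained' part)."""
    recon = lagged_design(eeg, lags) @ weights
    b, a = butter(2, band, btype="band", fs=fs)
    return np.corrcoef(filtfilt(b, a, recon), filtfilt(b, a, envelope))[0, 1]
```

Under this framing, training one such decoder per stimulus type versus one pooled decoder across all types corresponds to the stimulus-specific and general models that the abstract compares, and evaluating `band_accuracy` in successive bands (e.g., below 1 Hz versus higher modulation frequencies) corresponds to its frequency-resolved comparison.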

Bibliographic details
Published in: PLoS computational biology, 2021-09, Vol. 17 (9), p. e1009358
Authors: Zuk, Nathaniel J; Murphy, Jeremy W; Reilly, Richard B; Lalor, Edmund C
Contributor: Theunissen, Frédéric E.
Format: Article
Language: English
Online access: Full text
DOI: 10.1371/journal.pcbi.1009358
Publisher: Public Library of Science (United States)
PMID: 34534211
ORCID: 0000-0002-2466-6718; 0000-0001-5595-8548
Rights: © 2021 Zuk et al. Open access under the Creative Commons Attribution License (CC BY 4.0).
ISSN: 1553-7358, 1553-734X
EISSN: 1553-7358
Source: MEDLINE; PMC (PubMed Central); PLoS Open Access Journals; DOAJ Directory of Open Access Journals; EZB Electronic Journals Library
Subjects:
Acoustic Stimulation - methods
Acoustics
Adolescent
Adult
Audio frequency
Audiobooks
Auditory Perception - physiology
Biology and Life Sciences
Brain
Brain - physiology
Brain research
Central auditory processing
Cognitive ability
Computational Biology
Computer Simulation
EEG
Electroencephalography - statistics & numerical data
Engineering and Technology
Envelopes
Female
Humans
Linear Models
Low frequencies
Male
Medicine and Health Sciences
Models, Neurological
Music
Musical performances
Physical Sciences
Physiological aspects
Principal Component Analysis
Principal components analysis
Psychological aspects
Reconstruction
Research and Analysis Methods
Social Sciences
Sound
Speech
Speech - physiology
Speech Acoustics
Speech Perception - physiology
Tracking
Young Adult
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T16%3A28%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_plos_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Envelope%20reconstruction%20of%20speech%20and%20music%20highlights%20stronger%20tracking%20of%20speech%20at%20low%20frequencies&rft.jtitle=PLoS%20computational%20biology&rft.au=Zuk,%20Nathaniel%20J&rft.date=2021-09-01&rft.volume=17&rft.issue=9&rft.spage=e1009358&rft.epage=e1009358&rft.pages=e1009358-e1009358&rft.issn=1553-7358&rft.eissn=1553-7358&rft_id=info:doi/10.1371/journal.pcbi.1009358&rft_dat=%3Cgale_plos_%3EA677502996%3C/gale_plos_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2582586697&rft_id=info:pmid/34534211&rft_galeid=A677502996&rft_doaj_id=oai_doaj_org_article_5d03b1592d494788875b452e8c1514c1&rfr_iscdi=true