Neural speech restoration at the cocktail party: Auditory cortex recovers masked speech of both attended and ignored speakers
Saved in:
Published in: | PLoS biology 2020-10, Vol.18 (10), p.e3000883-e3000883 |
---|---|
Main authors: | Brodbeck, Christian; Jiao, Alex; Hong, L Elliot; Simon, Jonathan Z |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
container_end_page | e3000883 |
---|---|
container_issue | 10 |
container_start_page | e3000883 |
container_title | PLoS biology |
container_volume | 18 |
creator | Brodbeck, Christian; Jiao, Alex; Hong, L Elliot; Simon, Jonathan Z |
description | Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of several speech sources. Two speakers are easily segregated, even without binaural cues, but the neural mechanisms underlying this ability are not well understood. One possibility is that early cortical processing performs a spectrotemporal decomposition of the acoustic mixture, allowing the attended speech to be reconstructed via optimally weighted recombinations that discount spectrotemporal regions where sources heavily overlap. Using human magnetoencephalography (MEG) responses to a 2-talker mixture, we show evidence for an alternative possibility, in which early, active segregation occurs even for strongly spectrotemporally overlapping regions. Early (approximately 70-millisecond) responses to nonoverlapping spectrotemporal features are seen for both talkers. When competing talkers' spectrotemporal features mask each other, the individual representations persist, but they occur with an approximately 20-millisecond delay. This suggests that the auditory cortex recovers acoustic features that are masked in the mixture, even if they occurred in the ignored speech. The existence of such noise-robust cortical representations, of features present in attended as well as ignored speech, suggests an active cortical stream segregation process, which could explain a range of behavioral effects of ignored background speech. |
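The abstract contrasts two accounts: a "weighted recombination" scheme that reconstructs the attended talker by discounting spectrotemporal regions where the two sources heavily overlap, versus the active segregation the MEG data support. A toy NumPy sketch of the first idea can make the terms concrete; the arrays, the 50%-of-maximum overlap threshold, and all variable names here are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

# Toy spectrograms as magnitude arrays over (frequency, time).
# Cells where both talkers carry energy are "masked" in the mixture;
# an ideal-weighting account discounts those overlapping cells when
# reconstructing the attended talker.
rng = np.random.default_rng(0)
spec_a = rng.random((16, 100))   # attended talker (arbitrary units)
spec_b = rng.random((16, 100))   # ignored talker

mixture = spec_a + spec_b        # energy of the 2-talker mixture

# Ideal ratio mask: per-cell weight giving talker A's share of the mixture
dominance_a = spec_a / mixture

# Heavily overlapping cells: both talkers above half their peak energy
overlapping = (spec_a > 0.5 * spec_a.max()) & (spec_b > 0.5 * spec_b.max())

# Weighted recombination: scale the mixture by the mask. By construction
# this recovers talker A exactly; real mixtures would not be so forgiving.
recon_a = dominance_a * mixture

print(np.allclose(recon_a, spec_a))              # True
print(f"{overlapping.mean():.0%} of cells heavily overlap")
```

The sketch shows why the overlap question matters: wherever `overlapping` is true, `dominance_a` is far from 0 or 1, so a mask-based reconstruction necessarily blends both talkers there — exactly the regions where the paper instead finds delayed (~20 ms) but distinct neural representations of each talker's masked features.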
doi_str_mv | 10.1371/journal.pbio.3000883 |
format | Article |
fullrecord | [raw catalog export record, duplicating the title, abstract, authors, and subjects above; additional identifiers: DOI 10.1371/journal.pbio.3000883; PMID 33091003; published 2020-10-22 by Public Library of Science; academic editor: Malmierca, Manuel S.; open access under CC BY 4.0] |
fulltext | fulltext |
identifier | ISSN: 1545-7885 |
ispartof | PLoS biology, 2020-10, Vol.18 (10), p.e3000883-e3000883 |
issn | 1545-7885; 1544-9173 |
language | eng |
recordid | cdi_plos_journals_2460095511 |
source | MEDLINE; DOAJ Directory of Open Access Journals; Public Library of Science (PLoS) Journals Open Access; EZB-FREE-00999 freely available EZB journals; PubMed Central |
subjects | Acoustic noise; Acoustic Stimulation; Acoustics; Adult; Attention - physiology; Auditory cortex; Auditory Cortex - physiology; Auditory masking; Biology and Life Sciences; Brain research; Computer engineering; Cortex (auditory); Engineering and Technology; Female; Humans; Hypotheses; Magnetoencephalography; Male; Medicine and Health Sciences; Middle Aged; Models, Biological; Physical Sciences; Physiological aspects; Psychological research; Representations; Research and Analysis Methods; Segregation process; Social Sciences; Speech; Speech - physiology; Speech perception; Time Factors; Young Adult |
title | Neural speech restoration at the cocktail party: Auditory cortex recovers masked speech of both attended and ignored speakers |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T21%3A53%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_plos_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Neural%20speech%20restoration%20at%20the%20cocktail%20party:%20Auditory%20cortex%20recovers%20masked%20speech%20of%20both%20attended%20and%20ignored%20speakers&rft.jtitle=PLoS%20biology&rft.au=Brodbeck,%20Christian&rft.date=2020-10-22&rft.volume=18&rft.issue=10&rft.spage=e3000883&rft.epage=e3000883&rft.pages=e3000883-e3000883&rft.issn=1545-7885&rft.eissn=1545-7885&rft_id=info:doi/10.1371/journal.pbio.3000883&rft_dat=%3Cgale_plos_%3EA645323295%3C/gale_plos_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2460095511&rft_id=info:pmid/33091003&rft_galeid=A645323295&rft_doaj_id=oai_doaj_org_article_c1f2f771b5cd4a0e8b19a25747d4ebc8&rfr_iscdi=true |