Involvement of right STS in audio-visual integration for affective speech demonstrated using MEG

Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech: through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.
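The supra-additivity criterion in the abstract (AV > [unimodal auditory + unimodal visual]) reduces to a simple contrast on condition-averaged responses. The sketch below is a minimal Python/NumPy illustration of how such a contrast could be computed over the first 250 ms; the simulated arrays, sampling rate, and effect sizes are hypothetical stand-ins, not the authors' MEG data or analysis pipeline.

```python
import numpy as np

# Hypothetical evoked time courses (trials x samples) for one STS source.
# In the study these came from MEG source estimates; here we simulate them.
rng = np.random.default_rng(0)
sfreq = 1000                          # samples per second (assumed)
times = np.arange(0.0, 0.5, 1 / sfreq)

n_trials = 60
audio = rng.normal(1.0, 0.3, (n_trials, times.size))   # A-only condition
visual = rng.normal(0.8, 0.3, (n_trials, times.size))  # V-only condition
av = rng.normal(2.3, 0.3, (n_trials, times.size))      # AV condition

# Supra-additivity contrast on condition means: AV - (A + V).
contrast = av.mean(axis=0) - (audio.mean(axis=0) + visual.mean(axis=0))

# Restrict to the first 250 ms, the window in which the effect was reported.
window = times < 0.250
print(f"mean supra-additive contrast (0-250 ms): {contrast[window].mean():.3f}")
```

A positive mean contrast in the window would indicate a supra-additive response under this (simplified) criterion; the published analysis of course used source-localized MEG data and appropriate statistics rather than this toy comparison.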

Full description

Saved in:
Bibliographic Details
Published in: PloS one 2013-08, Vol.8 (8), p.e70648-e70648
Main Authors: Hagan, Cindy C, Woods, Will, Johnson, Sam, Green, Gary G R, Young, Andrew W
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_end_page e70648
container_issue 8
container_start_page e70648
container_title PloS one
container_volume 8
creator Hagan, Cindy C
Woods, Will
Johnson, Sam
Green, Gary G R
Young, Andrew W
description Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech: through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.
doi_str_mv 10.1371/journal.pone.0070648
format Article
fulltext fulltext
identifier ISSN: 1932-6203
ispartof PloS one, 2013-08, Vol.8 (8), p.e70648-e70648
issn 1932-6203
1932-6203
language eng
recordid cdi_plos_journals_1430420985
source MEDLINE; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; PubMed Central; Free Full-Text Journals in Chemistry; Public Library of Science (PLoS)
subjects Acoustic Stimulation
Adolescent
Adult
Audio visual equipment
Auditory Perception - physiology
Biology
Brain Mapping
Brain research
Brain Waves
Emotions
Emotions - physiology
Female
Humans
Information processing
Integration
Linguistics
Magnetic Resonance Imaging
Magnetoencephalography
Male
Medical imaging
Perception
Photic Stimulation
Psychology
Semantics
Sensory integration
Social and Behavioral Sciences
Speech
Speech perception
Studies
Trends
Visual perception
Visual Perception - physiology
Visual signals
Visual stimuli
Young Adult
title Involvement of right STS in audio-visual integration for affective speech demonstrated using MEG
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-31T09%3A48%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_plos_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Involvement%20of%20right%20STS%20in%20audio-visual%20integration%20for%20affective%20speech%20demonstrated%20using%20MEG&rft.jtitle=PloS%20one&rft.au=Hagan,%20Cindy%20C&rft.date=2013-08-12&rft.volume=8&rft.issue=8&rft.spage=e70648&rft.epage=e70648&rft.pages=e70648-e70648&rft.issn=1932-6203&rft.eissn=1932-6203&rft_id=info:doi/10.1371/journal.pone.0070648&rft_dat=%3Cgale_plos_%3EA478311050%3C/gale_plos_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1430420985&rft_id=info:pmid/23950977&rft_galeid=A478311050&rft_doaj_id=oai_doaj_org_article_cb728782f4fe4dfe859db887979d78a9&rfr_iscdi=true