Cortical integration of audio–visual speech and non-speech stimuli

Using fMRI we investigated the neural basis of audio–visual processing of speech and non-speech stimuli using physically similar auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses). Relative to uni-modal stimuli, the different multi-modal stimuli showed...

Detailed description

Saved in:
Bibliographic details
Published in: Brain and cognition 2010-11, Vol.74 (2), p.97-106
Main authors: Wyk, Brent C. Vander, Ramsay, Gordon J., Hudac, Caitlin M., Jones, Warren, Lin, David, Klin, Ami, Lee, Su Mei, Pelphrey, Kevin A.
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 106
container_issue 2
container_start_page 97
container_title Brain and cognition
container_volume 74
creator Wyk, Brent C. Vander
Ramsay, Gordon J.
Hudac, Caitlin M.
Jones, Warren
Lin, David
Klin, Ami
Lee, Su Mei
Pelphrey, Kevin A.
description Using fMRI we investigated the neural basis of audio–visual processing of speech and non-speech stimuli using physically similar auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses). Relative to uni-modal stimuli, the different multi-modal stimuli showed increased activation in largely non-overlapping areas. Ellipse-Speech, which most resembles naturalistic audio–visual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. Circle-Tone, an arbitrary audio–visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. Circle-Speech showed activation in lateral occipital cortex, and Ellipse-Tone did not show increased activation relative to uni-modal stimuli. Further analysis revealed that middle temporal regions, although identified as multi-modal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multi-modal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which multi-modal speech or non-speech percepts are evoked.
doi_str_mv 10.1016/j.bandc.2010.07.002
format Article
fulltext fulltext
identifier ISSN: 0278-2626
ispartof Brain and cognition, 2010-11, Vol.74 (2), p.97-106
issn 0278-2626
1090-2147
language eng
recordid cdi_proquest_miscellaneous_954592223
source MEDLINE; Elsevier ScienceDirect Journals
subjects Acoustic Stimulation
Adult
Analysis of Variance
Anatomical correlates of behavior
Audio–visual
Auditory Perception - physiology
Behavioral psychophysiology
Biological and medical sciences
Brain Mapping
Cerebral Cortex - physiology
fMRI
Fundamental and applied biological sciences. Psychology
Humans
Image Processing, Computer-Assisted
Language
Magnetic Resonance Imaging
Multi-modal processing
Production and perception of spoken language
Psychology. Psychoanalysis. Psychiatry
Psychology. Psychophysiology
Speech
Visual Perception - physiology
title Cortical integration of audio–visual speech and non-speech stimuli