Visual scanning patterns of a talking face when evaluating phonetic information in a native and non-native language

When comprehending speech, listeners can use information encoded in visual cues from a face to enhance auditory speech comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are primarily cued by eyebrow and head movements. Little is known about how the visual perception of segmental and prosodic speech information is influenced by linguistic experience. Using eye-tracking, we studied how perceivers' visual scanning of different regions on a talking face predicts accuracy in a task targeting segmental versus prosodic information, and also asked how this was influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face, and judged whether that video matched either the first or second audio sentence (or whether both sentences were the same). First, increased looking to the mouth predicted correct responses only for non-native language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when there were prosodic differences in the auditory sentences, and not when there were segmental differences. Third, in correct trials, the saccade amplitude in native language trials was significantly greater than in non-native trials, indicating more intensely focused fixations in the latter. Taken together, these results suggest that mouth-looking was generally more evident when processing a non-native versus native language in all analyses, but fascinatingly, when measuring perceivers' latency to fixate the mouth, this language effect was largest in trials where only prosodic information was useful for the task.

Bibliographic Details
Published in: PLoS ONE, 2024-05, Vol. 19 (5), p. e0304150
Main authors: Deng, Xizi; McClay, Elise; Jastrzebski, Erin; Wang, Yue; Yeung, H. Henny
Format: Article
Language: English
Online access: Full text
DOI: 10.1371/journal.pone.0304150
PMID: 38805447
Publisher: Public Library of Science (United States)
Rights: © 2024 Deng et al. Open access under the Creative Commons Attribution License.
ISSN: 1932-6203
EISSN: 1932-6203
Source: Public Library of Science (PLoS) Journals Open Access; MEDLINE; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals; PubMed Central; Free Full-Text Journals in Chemistry
Subjects:
Adult
Amplitude (Acoustics)
Amplitudes
Analysis
Articulatory phonetics
Biology and Life Sciences
English language
Eye movements
Eye Movements - physiology
Eye-Tracking Technology
Face
Face - physiology
Familiarity
Female
Frequency
Head movement
Humans
Language
Latency
Linguistics
Listening comprehension
Male
Mandarin
Medicine and Health Sciences
Mouth
Native language
Native languages
Phonetics
Pitch
Prosody
Saccadic eye movements
Scanning
Sentences
Shakespeare plays
Social Sciences
Speech
Speech - physiology
Speech perception
Speech Perception - physiology
Talking
Visual perception
Visual Perception - physiology
Visual stimuli
Young Adult