Children Flexibly Seek Visual Information to Support Signed and Spoken Language Comprehension

Bibliographic Details
Published in: Journal of experimental psychology. General, 2020-06, Vol.149 (6), p.1078-1096
Main authors: MacDonald, Kyle; Marchman, Virginia A.; Fernald, Anne; Frank, Michael C.
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 1096
container_issue 6
container_start_page 1078
container_title Journal of experimental psychology. General
container_volume 149
creator MacDonald, Kyle
Marchman, Virginia A.
Fernald, Anne
Frank, Michael C.
description During grounded language comprehension, listeners must link the incoming linguistic signal to the visual world despite uncertainty in the input. Information gathered through visual fixations can facilitate understanding. But do listeners flexibly seek supportive visual information? Here, we propose that even young children can adapt their gaze and actively gather information for the goal of language comprehension. We present 2 studies of eye movements during real-time language processing, where the value of fixating on a social partner varies across different contexts. First, compared with children learning spoken English (n = 80), young American Sign Language (ASL) learners (n = 30) delayed gaze shifts away from a language source and produced a higher proportion of language-consistent eye movements. This result provides evidence that ASL learners adapt their gaze to effectively divide attention between language and referents, which both compete for processing via the visual channel. Second, English-speaking preschoolers (n = 39) and adults (n = 31) fixated longer on a speaker's face while processing language in a noisy auditory environment. Critically, like the ASL learners in Experiment 1, this delay resulted in gathering more visual information and a higher proportion of language-consistent gaze shifts. Taken together, these studies suggest that young listeners can adapt their gaze to seek visual information from social partners to support real-time language comprehension.
doi_str_mv 10.1037/xge0000702
format Article
pmid 31750713
publisher American Psychological Association
fulltext fulltext
identifier ISSN: 0096-3445
ispartof Journal of experimental psychology. General, 2020-06, Vol.149 (6), p.1078-1096
issn 0096-3445
1939-2222
language eng
recordid cdi_proquest_journals_2316512066
source APA PsycARTICLES; MEDLINE
subjects Attention - physiology
Audiences
Child development
Child psychology
Child, Preschool
Childhood Development
Communication
Comprehension
Comprehension - physiology
Eye Fixation
Eye Movements
Eye Movements - physiology
Female
Human
Humans
Information Seeking
Language
Language Development
Linguistics
Listening Comprehension
Male
Partners
Sensory Adaptation
Sensory Integration
Sign Language
Speech - physiology
Test Construction
title Children Flexibly Seek Visual Information to Support Signed and Spoken Language Comprehension
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T02%3A37%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Children%20Flexibly%20Seek%20Visual%20Information%20to%20Support%20Signed%20and%20Spoken%20Language%20Comprehension&rft.jtitle=Journal%20of%20experimental%20psychology.%20General&rft.au=MacDonald,%20Kyle&rft.date=2020-06-01&rft.volume=149&rft.issue=6&rft.spage=1078&rft.epage=1096&rft.pages=1078-1096&rft.issn=0096-3445&rft.eissn=1939-2222&rft_id=info:doi/10.1037/xge0000702&rft_dat=%3Cproquest_pubme%3E2316784832%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2316512066&rft_id=info:pmid/31750713&rfr_iscdi=true