Gaze-based predictive models of deep reading comprehension

Bibliographic details
Published in: User modeling and user-adapted interaction, 2023-07, Vol. 33 (3), p. 687-725
Main authors: Southwell, Rosy; Mills, Caitlin; Caruso, Megan; D’Mello, Sidney K.
Format: Article
Language: English
Online access: Full text
Description: Eye gaze patterns can reveal user attention, reading fluency, corrective responding, and other reading processes, suggesting they can be used to develop automated, real-time assessments of comprehension. However, past work has focused on modeling factual comprehension, whereas we ask whether gaze patterns reflect deeper levels of comprehension, where inferencing and elaboration are key. We trained linear regression and random forest models to predict the quality of users’ open-ended self-explanations (SEs), collected both during and after reading and scored on a continuous scale by human raters. Our models use theoretically grounded eye-tracking features (number and duration of fixations, saccade distance, proportion of regressive and horizontal saccades, spatial dispersion of fixations, and reading time) captured from a remote, head-free eye tracker (Tobii TX300) as adult users read a long expository text (6500 words) in two studies (N = 106 and 131; 247 total). Our models: (1) demonstrated convergence with human-scored SEs (r = .322 and .354) by capturing both within-user and between-user differences in comprehension; (2) were distinct from alternate models of mind-wandering and shallow comprehension; (3) predicted multiple-choice posttests of inference-level comprehension (r = .288 and .354) measured immediately after reading and after a week-long delay, beyond the comparison models; and (4) generalized across new users and datasets. Such models could be embedded in digital reading interfaces to improve comprehension outcomes by delivering interventions based on users’ level of comprehension.
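The gaze features named in the abstract are mostly simple aggregates over an ordered sequence of fixations. As a minimal illustrative sketch (not the authors' implementation; the `Fixation` type and the 20-pixel same-line threshold are assumptions for illustration), a few of them might be computed like this:

```python
from dataclasses import dataclass
from math import hypot
from statistics import mean, pstdev


@dataclass
class Fixation:
    x: float       # horizontal gaze position (px)
    y: float       # vertical gaze position (px)
    dur_ms: float  # fixation duration (ms)


def gaze_features(fixations: list[Fixation]) -> dict[str, float]:
    """Aggregate global gaze features from one reading session,
    in the spirit of the feature set listed in the abstract."""
    durs = [f.dur_ms for f in fixations]
    # Saccades are the transitions between consecutive fixations.
    saccades = list(zip(fixations, fixations[1:]))
    dists = [hypot(b.x - a.x, b.y - a.y) for a, b in saccades]
    # A regressive saccade moves leftward, i.e., back in the text.
    regressive = sum(1 for a, b in saccades if b.x < a.x)
    # A "horizontal" saccade stays roughly on the same line
    # (threshold of 20 px is an assumed value).
    horizontal = sum(1 for a, b in saccades if abs(b.y - a.y) < 20)
    return {
        "n_fixations": len(fixations),
        "mean_fix_dur_ms": mean(durs),
        "mean_saccade_dist_px": mean(dists),
        "prop_regressive": regressive / len(saccades),
        "prop_horizontal": horizontal / len(saccades),
        # Spatial dispersion: spread of fixation positions.
        "dispersion_x_px": pstdev(f.x for f in fixations),
        "reading_time_ms": sum(durs),
    }
```

In the study, features of this kind were fed to linear regression and random forest models to predict human-rated self-explanation quality; the sketch above only shows the feature-extraction step.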
DOI: 10.1007/s11257-022-09346-7
Publisher: Springer Netherlands, Dordrecht
Rights: The Author(s), under exclusive licence to Springer Nature B.V. 2022.
ISSN: 0924-1868
EISSN: 1573-1391
Source: EBSCOhost Business Source Complete; SpringerLink Journals - AutoHoldings
Subjects: Computer Science
Eye movements
Management of Computing and Information Systems
Multimedia Information Systems
Prediction models
Reading comprehension
User Interfaces and Human Computer Interaction