A Dynamical Scan-Path Model for Task-Dependence During Scene Viewing

In real-world scene perception, human observers generate sequences of fixations to move image patches into the high-acuity center of the visual field. Models of visual attention developed over the last 25 years aim to predict two-dimensional probabilities of gaze positions for a given image via saliency maps. Recently, progress has been made on models for the generation of scan paths under the constraints of saliency as well as attentional and oculomotor restrictions. Experimental research has demonstrated that task constraints can have a strong impact on viewing behavior. Here, we propose a scan-path model for both fixation positions and fixation durations, which includes influences of task instructions and interindividual differences. Based on an eye-movement experiment with four different task conditions, we estimated model parameters for each individual observer and task condition using a fully Bayesian dynamical modeling framework with a joint spatial-temporal likelihood approach and sequential estimation. Resulting parameter values demonstrate that model properties such as the attentional span are adjusted to task requirements. Posterior predictive checks indicate that our dynamical model can reproduce task differences in scan-path statistics across individual observers.
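The abstract refers to a joint spatial-temporal likelihood over fixation positions and fixation durations. As a minimal, hypothetical sketch only (not the authors' implementation), the Python snippet below shows one way such a joint log-likelihood could be assembled from a normalized fixation-probability map (spatial part) and a Gamma model of fixation durations (temporal part); the function name, the Gamma choice, and all parameters are assumptions for illustration.

```python
# Hypothetical sketch of a joint spatial-temporal scan-path log-likelihood.
# Not the authors' model: the fixation map, Gamma duration model, and all
# parameter names are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

def scanpath_loglik(fix_xy, fix_dur, prob_map, dur_shape, dur_scale):
    """Log-likelihood of one observed scan path.

    fix_xy    : (n, 2) integer pixel coordinates of fixations (col, row)
    fix_dur   : (n,) fixation durations in seconds
    prob_map  : 2D array summing to 1 (fixation probability per pixel)
    dur_shape, dur_scale : Gamma parameters for fixation durations
    """
    cols, rows = fix_xy[:, 0], fix_xy[:, 1]
    spatial = np.sum(np.log(prob_map[rows, cols] + 1e-12))   # spatial term
    temporal = np.sum(gamma.logpdf(fix_dur, a=dur_shape,     # temporal term
                                   scale=dur_scale))
    return spatial + temporal

# Example usage with synthetic inputs:
# prob_map = np.full((600, 800), 1.0 / (600 * 800))
# ll = scanpath_loglik(np.array([[100, 50], [400, 300]]),
#                      np.array([0.25, 0.31]), prob_map, 2.0, 0.15)
```

In a fully Bayesian treatment like the one described in the abstract, this log-likelihood would be evaluated per observer and task condition and combined with priors over the model parameters to obtain posterior estimates.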

Detailed Description

Saved in:
Bibliographic Details
Published in: Psychological review 2023-04, Vol.130 (3), p.807-840
Main Authors: Schwetlick, Lisa, Backhaus, Daniel, Engbert, Ralf
Format: Article
Language: English
Subjects:
Online Access: Full text
container_end_page 840
container_issue 3
container_start_page 807
container_title Psychological review
container_volume 130
creator Schwetlick, Lisa
Backhaus, Daniel
Engbert, Ralf
description In real-world scene perception, human observers generate sequences of fixations to move image patches into the high-acuity center of the visual field. Models of visual attention developed over the last 25 years aim to predict two-dimensional probabilities of gaze positions for a given image via saliency maps. Recently, progress has been made on models for the generation of scan paths under the constraints of saliency as well as attentional and oculomotor restrictions. Experimental research has demonstrated that task constraints can have a strong impact on viewing behavior. Here, we propose a scan-path model for both fixation positions and fixation durations, which includes influences of task instructions and interindividual differences. Based on an eye-movement experiment with four different task conditions, we estimated model parameters for each individual observer and task condition using a fully Bayesian dynamical modeling framework with a joint spatial-temporal likelihood approach and sequential estimation. Resulting parameter values demonstrate that model properties such as the attentional span are adjusted to task requirements. Posterior predictive checks indicate that our dynamical model can reproduce task differences in scan-path statistics across individual observers.
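To make the "posterior predictive checks" mentioned in the description concrete, the following hypothetical Python sketch draws simulated fixation durations from posterior samples of Gamma duration parameters and compares a summary statistic (mean duration) against the observed data. The function name, the Gamma duration model, and the choice of summary statistic are illustrative assumptions, not taken from the article.

```python
# Hypothetical posterior predictive check for fixation durations.
# Illustrative only: assumes posterior samples of Gamma parameters exist.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)

def ppc_mean_duration(post_shape, post_scale, observed_dur, n_rep=1000):
    """Simulate replicated datasets from posterior draws and return the
    distribution of simulated mean durations plus the observed mean."""
    n = len(observed_dur)
    sims = np.empty(n_rep)
    for r in range(n_rep):
        i = rng.integers(len(post_shape))          # pick one posterior draw
        sims[r] = gamma.rvs(a=post_shape[i], scale=post_scale[i],
                            size=n, random_state=rng).mean()
    return sims, float(np.mean(observed_dur))

# If the observed mean falls well inside the distribution of simulated means,
# the fitted model reproduces this aspect of the data for that observer/task.
```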
doi_str_mv 10.1037/rev0000379
format Article
fulltext fulltext
identifier ISSN: 0033-295X
ispartof Psychological review, 2023-04, Vol.130 (3), p.807-840
issn 0033-295X
1939-1471
language eng
recordid cdi_proquest_miscellaneous_2720929550
source MEDLINE; APA PsycARTICLES
subjects Adjustment
Attention Span
Bayes Theorem
Bayesian analysis
Estimation
Experiments
Eye fixation
Eye Movements
Fixation
Fixation, Ocular
Human
Humans
Individual Differences
Likelihood Functions
Observation
Probability
Sequences
Statistical Probability
Task Analysis
Visual attention
Visual Field
Visual Perception
title A Dynamical Scan-Path Model for Task-Dependence During Scene Viewing
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T22%3A28%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Dynamical%20Scan-Path%20Model%20for%20Task-Dependence%20During%20Scene%20Viewing&rft.jtitle=Psychological%20review&rft.au=Schwetlick,%20Lisa&rft.date=2023-04-01&rft.volume=130&rft.issue=3&rft.spage=807&rft.epage=840&rft.pages=807-840&rft.issn=0033-295X&rft.eissn=1939-1471&rft_id=info:doi/10.1037/rev0000379&rft_dat=%3Cproquest_cross%3E2819686900%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2720485046&rft_id=info:pmid/36190753&rfr_iscdi=true