A dynamical scan path model for task-dependence during scene viewing

In real-world scene perception, human observers generate sequences of fixations to move image patches into the high-acuity center of the visual field. Models of visual attention developed over the last 25 years aim to predict two-dimensional probabilities of gaze positions for a given image via saliency maps. Recently, progress has been made on models for the generation of scan paths under the constraints of saliency as well as attentional and oculomotor restrictions. Experimental research has demonstrated that task constraints can have a strong impact on viewing behavior. Here we propose a scan path model for both fixation positions and fixation durations that includes influences of task instructions and interindividual differences. Based on an eye-movement experiment with four different task conditions, we estimated model parameters for each individual observer and task condition within a fully Bayesian dynamical modeling framework, using a joint spatial-temporal likelihood approach with sequential estimation. The resulting parameter values demonstrate that model properties such as the attentional span are adjusted to task requirements. Posterior predictive checks indicate that our dynamical model can reproduce task differences in scan path statistics across individual observers.
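The joint spatial-temporal likelihood mentioned in the abstract can be made concrete with a minimal sketch. This is not the authors' implementation: the function name `scanpath_log_likelihood`, the softmax sharpening parameter `kappa`, and the Gamma duration model are illustrative assumptions standing in for the paper's dynamical model.

```python
import numpy as np
from scipy.stats import gamma

def scanpath_log_likelihood(fixations, durations, saliency, kappa=1.0, shape=2.0):
    """Joint spatial-temporal log-likelihood of one scan path (sketch).

    fixations : (N, 2) integer array of fixation positions (row, col)
    durations : (N,) array of fixation durations in seconds
    saliency  : 2D saliency map over the image
    """
    # Spatial factor: turn the saliency map into a fixation-probability
    # map via a softmax; kappa sharpens or flattens the map.
    prob = np.exp(kappa * saliency)
    prob /= prob.sum()
    rows, cols = fixations[:, 0], fixations[:, 1]
    log_spatial = np.log(prob[rows, cols]).sum()

    # Temporal factor: Gamma log-density for each fixation duration, with
    # the rate chosen so the model mean matches the empirical mean (a crude
    # stand-in for the paper's duration dynamics).
    rate = shape / durations.mean()
    log_temporal = gamma.logpdf(durations, a=shape, scale=1.0 / rate).sum()

    # The scan path factorizes sequentially, so the joint log-likelihood is
    # the sum of the per-fixation spatial and temporal terms; kappa and
    # shape could then be estimated per observer and task, e.g. via MCMC.
    return log_spatial + log_temporal
```

A companion sketch of the posterior predictive checks appears after the record fields below.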

Bibliographic details
Published in: arXiv.org 2022-08
Main authors: Schwetlick, Lisa; Backhaus, Daniel; Engbert, Ralf
Format: Article
Language: eng
Subjects: Acuity; Dynamic models; Eye movements; Fixation; Observers; Parameters; Salience; Viewing; Visual fields
Online access: Full text
container_title arXiv.org
creator Schwetlick, Lisa; Backhaus, Daniel; Engbert, Ralf
description In real-world scene perception, human observers generate sequences of fixations to move image patches into the high-acuity center of the visual field. Models of visual attention developed over the last 25 years aim to predict two-dimensional probabilities of gaze positions for a given image via saliency maps. Recently, progress has been made on models for the generation of scan paths under the constraints of saliency as well as attentional and oculomotor restrictions. Experimental research has demonstrated that task constraints can have a strong impact on viewing behavior. Here we propose a scan path model for both fixation positions and fixation durations that includes influences of task instructions and interindividual differences. Based on an eye-movement experiment with four different task conditions, we estimated model parameters for each individual observer and task condition within a fully Bayesian dynamical modeling framework, using a joint spatial-temporal likelihood approach with sequential estimation. The resulting parameter values demonstrate that model properties such as the attentional span are adjusted to task requirements. Posterior predictive checks indicate that our dynamical model can reproduce task differences in scan path statistics across individual observers.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-08
issn 2331-8422
language eng
recordid cdi_proquest_journals_2612566996
source Free E-Journals
subjects Acuity
Dynamic models
Eye movements
Fixation
Observers
Parameters
Salience
Viewing
Visual fields
title A dynamical scan path model for task-dependence during scene viewing
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-18T04%3A22%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=A%20dynamical%20scan%20path%20model%20for%20task-dependence%20during%20scene%20viewing&rft.jtitle=arXiv.org&rft.au=Schwetlick,%20Lisa&rft.date=2022-08-12&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2612566996%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2612566996&rft_id=info:pmid/&rfr_iscdi=true