Eye movements in iconic visual search

Visual cognition depends critically on the moment-to-moment orientation of gaze. To change the gaze to a new location in space, that location must be computed and used by the oculomotor system. One of the most common sources of information for this computation is the visual appearance of an object....

Detailed description

Bibliographic details
Published in: Vision research (Oxford), 2002-05, Vol. 42 (11), p. 1447-1463
Main authors: Rao, Rajesh P.N., Zelinsky, Gregory J., Hayhoe, Mary M., Ballard, Dana H.
Format: Article
Language: English
Subjects:
Online access: Full text
description Visual cognition depends critically on the moment-to-moment orientation of gaze. To change the gaze to a new location in space, that location must be computed and used by the oculomotor system. One of the most common sources of information for this computation is the visual appearance of an object. A crucial question is: How is the appearance information contained in the photometric array converted into a target position? This paper proposes a model that accomplishes this calculation. The model uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion, with the target's largest-scale filter responses being compared first. Task-relevant target locations are represented as saliency maps, which are used to program eye movements. A central feature of the model is that it separates the targeting process, which changes gaze, from the decision process, which extracts information at or near the new gaze point to guide behavior. The model provides a detailed explanation for the center-of-gravity saccades that have been observed in many previous experiments. In addition, the model's targeting performance has been compared with the eye movements of human subjects under identical conditions in natural visual search tasks. The results show good agreement both quantitatively (the search paths are strikingly similar) and qualitatively (the fixations of false targets are comparable).
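The abstract outlines the model's machinery: multi-scale representations, a coarse-to-fine comparison that starts with the largest-scale responses, and a saliency map whose peak programs the next saccade. A minimal sketch of that coarse-to-fine search idea, assuming numpy and substituting raw intensity SSD for the paper's oriented spatiochromatic filter responses (function names and parameters here are invented for illustration; this is not the authors' code):

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (assumes even dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def saliency_map(scene, target):
    """Negative sum-of-squared-differences at every placement: a crude
    saliency map in which the best match scores highest (zero)."""
    th, tw = target.shape
    H, W = scene.shape
    sal = np.empty((H - th + 1, W - tw + 1))
    for y in range(sal.shape[0]):
        for x in range(sal.shape[1]):
            d = scene[y:y + th, x:x + tw] - target
            sal[y, x] = -np.sum(d * d)
    return sal

def coarse_to_fine_search(scene, target, levels=2, radius=2):
    """Locate `target` in `scene`: match exhaustively at the coarsest
    pyramid level first, then refine the estimate within a small window
    at each finer level. Returns the proposed fixation point (row, col)."""
    scenes, targets = [scene], [target]
    for _ in range(levels):
        scenes.append(downsample(scenes[-1]))
        targets.append(downsample(targets[-1]))
    y = x = 0
    for level in range(levels, -1, -1):
        s, t = scenes[level], targets[level]
        if level == levels:
            sal = saliency_map(s, t)  # exhaustive, but on a tiny image
            y, x = np.unravel_index(np.argmax(sal), sal.shape)
        else:
            y, x = 2 * y, 2 * x       # project estimate to the finer grid
            best = -np.inf
            for yy in range(max(0, y - radius),
                            min(s.shape[0] - t.shape[0], y + radius) + 1):
                for xx in range(max(0, x - radius),
                                min(s.shape[1] - t.shape[1], x + radius) + 1):
                    d = s[yy:yy + t.shape[0], xx:xx + t.shape[1]] - t
                    score = -np.sum(d * d)
                    if score > best:
                        best, y, x = score, yy, xx
    return int(y), int(x)

# Demo: embed a 4x4 target in an empty 32x32 scene and search for it.
scene = np.zeros((32, 32))
target = np.arange(16, dtype=float).reshape(4, 4)
scene[12:16, 20:24] = target
print(coarse_to_fine_search(scene, target))  # (12, 20)
```

The design point the sketch illustrates is why coarse-to-fine is cheap: the exhaustive comparison happens only on the heavily downsampled image, and each finer level inspects just a small window around the projected estimate. The paper's model performs the comparison on filter-response vectors rather than raw pixels.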
doi 10.1016/S0042-6989(02)00040-8
issn 0042-6989
eissn 1878-5646
recordid cdi_proquest_miscellaneous_71790699
source MEDLINE; Elsevier ScienceDirect Journals; EZB-FREE-00999 freely available EZB journals
subjects Attention
Attention - physiology
Biological and medical sciences
Computation
Eye Movements - physiology
Fixation, Ocular - physiology
Fundamental and applied biological sciences. Psychology
Humans
Models, Psychological
Perception
Psychology. Psychoanalysis. Psychiatry
Psychology. Psychophysiology
Psychomotor Performance - physiology
Saccades
Saccades - physiology
Vision
Visual Perception - physiology
Visuomotor control
title Eye movements in iconic visual search
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T11%3A11%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Eye%20movements%20in%20iconic%20visual%20search&rft.jtitle=Vision%20research%20(Oxford)&rft.au=Rao,%20Rajesh%20P.N.&rft.date=2002-05-01&rft.volume=42&rft.issue=11&rft.spage=1447&rft.epage=1463&rft.pages=1447-1463&rft.issn=0042-6989&rft.eissn=1878-5646&rft.coden=VISRAM&rft_id=info:doi/10.1016/S0042-6989(02)00040-8&rft_dat=%3Cproquest_cross%3E71790699%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=71790699&rft_id=info:pmid/12044751&rft_els_id=S0042698902000408&rfr_iscdi=true