Predicting visual search performance by quantifying stimuli similarities
The effect of distractor homogeneity and target-distractor similarity on visual search was previously explored under two models designed for computer vision. We extend these models here to account for internal noise and to evaluate their ability to predict human performance. In four experiments, observers searched for a horizontal target among distractors of different orientation (orientation search; Experiments 1 and 2) or a gray target among distractors of different color (color search; Experiments 3 and 4). Distractor homogeneity and target-distractor similarity were systematically manipulated. We then tested our models' ability to predict the search performance of human observers. Our models' predictions were closer to human performance than those of other prominent quantitative models.
Saved in:
Published in: | Journal of vision (Charlottesville, Va.), 2008-04, Vol.8 (4), p.9.1-922 |
---|---|
Main authors: | Avraham, Tamar; Yeshurun, Yaffa; Lindenbaum, Michael |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 922 |
---|---|
container_issue | 4 |
container_start_page | 9.1 |
container_title | Journal of vision (Charlottesville, Va.) |
container_volume | 8 |
creator | Avraham, Tamar; Yeshurun, Yaffa; Lindenbaum, Michael |
description | The effect of distractor homogeneity and target-distractor similarity on visual search was previously explored under two models designed for computer vision. We extend these models here to account for internal noise and to evaluate their ability to predict human performance. In four experiments, observers searched for a horizontal target among distractors of different orientation (orientation search; Experiments 1 and 2) or a gray target among distractors of different color (color search; Experiments 3 and 4). Distractor homogeneity and target-distractor similarity were systematically manipulated. We then tested our models' ability to predict the search performance of human observers. Our models' predictions were closer to human performance than those of other prominent quantitative models. |
doi_str_mv | 10.1167/8.4.9 |
format | Article |
fullrecord | Raw ProQuest/CrossRef export record (XML), omitted here; it duplicates the title, authors, abstract, journal, and identifiers already listed in the other fields. Details unique to the export record: PMID 18484848; publication date 2008-04-17; publisher: United States; peer reviewed; open access (free_for_read); MeSH subject headings as listed under `subjects`. |
fulltext | fulltext |
identifier | ISSN: 1534-7362 |
ispartof | Journal of vision (Charlottesville, Va.), 2008-04, Vol.8 (4), p.9.1-922 |
issn | 1534-7362 (print ISSN and EISSN are identical) |
language | eng |
recordid | cdi_proquest_miscellaneous_70753204 |
source | MEDLINE; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals |
subjects | Attention - physiology; Humans; Models, Psychological; Orientation - physiology; Photic Stimulation; Psychophysics; Visual Perception - physiology |
title | Predicting visual search performance by quantifying stimuli similarities |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T12%3A42%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Predicting%20visual%20search%20performance%20by%20quantifying%20stimuli%20similarities&rft.jtitle=Journal%20of%20vision%20(Charlottesville,%20Va.)&rft.au=Avraham,%20Tamar&rft.date=2008-04-17&rft.volume=8&rft.issue=4&rft.spage=9.1&rft.epage=922&rft.pages=9.1-922&rft.issn=1534-7362&rft.eissn=1534-7362&rft_id=info:doi/10.1167/8.4.9&rft_dat=%3Cproquest_cross%3E70753204%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=70753204&rft_id=info:pmid/18484848&rfr_iscdi=true |
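The resolver link in the `url` field is an OpenURL (ANSI/NISO Z39.88-2004) ContextObject: the citation travels as `rft.*` key/value pairs in the query string. A minimal sketch of recovering those fields with Python's standard library; the abbreviated URL below is a hand-shortened copy of the full link above, keeping only its `rft.*` parameters:

```python
from urllib.parse import urlsplit, parse_qs

# Abbreviated copy of the SFX resolver link from the `url` field:
# citation metadata is carried as `rft.*` ("referent") query parameters.
openurl = (
    "https://sfx.bib-bvb.de/sfx_tum?url_ver=Z39.88-2004"
    "&rft_val_fmt=info:ofi/fmt:kev:mtx:journal"
    "&rft.genre=article"
    "&rft.atitle=Predicting%20visual%20search%20performance"
    "%20by%20quantifying%20stimuli%20similarities"
    "&rft.jtitle=Journal%20of%20vision%20(Charlottesville,%20Va.)"
    "&rft.au=Avraham,%20Tamar"
    "&rft.date=2008-04-17&rft.volume=8&rft.issue=4"
    "&rft.spage=9.1&rft.epage=922&rft.issn=1534-7362"
    "&rft_id=info:doi/10.1167/8.4.9"
)

# parse_qs maps each key to a list of (percent-decoded) values.
params = parse_qs(urlsplit(openurl).query)

# Keep only the referent keys ("rft." prefix, not "rft_"), stripping the prefix.
citation = {k.split(".", 1)[1]: v[0]
            for k, v in params.items() if k.startswith("rft.")}

print(citation["atitle"])                                     # article title
print(citation["jtitle"])                                     # journal title
print(citation["volume"], citation["issue"], citation["spage"])
```

Note that identifier keys such as `rft_id` and `rft_val_fmt` use an underscore, not a dot, so the `startswith("rft.")` filter deliberately excludes them.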