Basic-level visual similarity and category specificity

The role of visual crowding in category deficits has been widely discussed (e.g., Humphreys, Riddoch, & Quinlan, 1988; Laws & Gale, 2002; Tranel, Logan, Frank, & Damasio, 1997). Most studies have measured overlap at the superordinate level (compare different examples of ‘animal’) rather than at the basic level (compare different examples of ‘dog’). In this study, we therefore derived two measures of basic-level overlap for a range of categories. The first was a computational measure generated by a self-organising neural network trained to process pictures of living and non-living things; the second was a rating of perceived visual similarity generated by human subjects to the item names. The computational measure indicated that the pattern of crowded/uncrowded does not honour a living/non-living distinction. Nevertheless, different superordinates showed varied degrees of basic-level overlap, suggesting that specific token choice affects some superordinates more than others, e.g., individual fruit and vegetable tokens show greater variability than any other items, while tools and vehicles produce more reliable or overlapping basic-level visual representations. Finally, subject ratings correlated significantly with the computational measures, indicating that the neural model represents structural properties of the objects that are psychologically meaningful.
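The record does not reproduce the authors' actual network architecture, so the following is only an illustrative sketch of the general technique the abstract describes: a generic self-organising map (SOM) is trained on item vectors, and a category's basic-level "overlap" is scored as the fraction of exemplar pairs that map to the same best-matching unit. Every function name, grid size, and learning parameter here is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=30, lr0=0.5, sigma0=2.0):
    """Train a small self-organising map on row-vector inputs.
    Illustrative parameters only; not the published model."""
    n_units = grid[0] * grid[1]
    weights = rng.random((n_units, data.shape[1]))
    # Grid coordinates of each unit, used for neighbourhood distances.
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1)
                       / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def category_overlap(weights, items):
    """Fraction of exemplar pairs mapped to the same best-matching unit:
    higher = more visually 'crowded' basic-level representations."""
    bmus = [int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            for x in items]
    pairs = [(a, b) for i, a in enumerate(bmus) for b in bmus[i + 1:]]
    return float(np.mean([a == b for a, b in pairs]))

# Synthetic demo: one tightly clustered category vs. one dispersed one.
crowded = np.tile(rng.random(16), (8, 1)) + 0.02 * rng.standard_normal((8, 16))
dispersed = rng.random((8, 16))
w = train_som(np.vstack([crowded, dispersed]))
```

On this synthetic data the clustered category yields a higher overlap score than the dispersed one, mirroring the crowded/uncrowded contrast the abstract describes. The abstract's final step, relating subject ratings to the computational measure, is an ordinary correlation, e.g. `np.corrcoef(ratings, overlap_scores)[0, 1]`.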

Detailed description

Saved in:
Bibliographic details
Published in: Brain and cognition, 2003-11, Vol.53 (2), p.229-231
Main authors: Gale, Tim M., Laws, Keith R., Frank, Ray J., Leeson, Verity C.
Format: Article
Language: English
Subjects:
Online access: Full text
DOI: 10.1016/S0278-2626(03)00115-5
Publisher: Elsevier Inc (United States)
PMID: 14607153
ISSN: 0278-2626
EISSN: 1090-2147
Source: MEDLINE; Elsevier ScienceDirect Journals
Subjects: Humans; Learning; Neural Networks (Computer); Random Allocation; Sensitivity and Specificity; Visual Perception