High temporal resolution decoding of object position and category

We effortlessly and seemingly instantaneously recognize thousands of objects, although we rarely--if ever--see the same image of an object twice. The retinal image of an object can vary by context, size, viewpoint, illumination, and location. The present study examined how the visual system abstracts object category across variations in retinal location.

Detailed description

Saved in:
Bibliographic details
Published in: Journal of vision (Charlottesville, Va.), 2011-09, Vol.11 (10), p.9-9
Main authors: Carlson, Thomas A; Hogendoorn, Hinze; Kanai, Ryota; Mesik, Juraj; Turret, Jeremy
Format: Article
Language: eng
Subjects:
Online access: Full text
description We effortlessly and seemingly instantaneously recognize thousands of objects, although we rarely--if ever--see the same image of an object twice. The retinal image of an object can vary by context, size, viewpoint, illumination, and location. The present study examined how the visual system abstracts object category across variations in retinal location. In three experiments, participants viewed images of objects presented to different retinal locations while brain activity was recorded using magnetoencephalography (MEG). A pattern classifier was trained to recover the stimulus position (Experiments 1, 2, and 3) and category (Experiment 3) from the recordings. Using this decoding approach, we show that an object's location in the visual field can be recovered in high temporal resolution (5 ms) and with sufficient fidelity to capture topographic organization in visual areas. Experiment 3 showed that an object's category could be recovered from the recordings as early as 135 ms after the onset of the stimulus and that category decoding generalized across retinal location (i.e., position invariance). Our experiments thus show that the visual system rapidly constructs a category representation for objects that is invariant to position.
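The decoding approach described in the abstract can be sketched as a time-resolved classification analysis: a classifier is trained and tested independently at each time bin of the recording, yielding decoding accuracy as a function of time from stimulus onset. The synthetic data, the nearest-centroid classifier, and all array shapes below are illustrative assumptions for a minimal sketch, not the authors' actual MEG pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MEG recordings: trials x sensors x time bins.
# The 5 ms bin width follows the paper's description; the data and the
# nearest-centroid decoder are illustrative assumptions.
n_trials, n_sensors, n_times = 80, 20, 60   # 60 bins x 5 ms = 300 ms
labels = rng.integers(0, 2, n_trials)       # two stimulus positions

X = rng.normal(size=(n_trials, n_sensors, n_times))
# Inject a label-dependent signal from bin 20 (i.e., 100 ms) onward
X[labels == 1, :, 20:] += 0.8

def decode_timecourse(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding at each time bin."""
    n_trials, _, n_times = X.shape
    acc = np.zeros(n_times)
    folds = np.arange(n_trials) % n_folds
    for t in range(n_times):
        correct = 0
        for f in range(n_folds):
            train, test = folds != f, folds == f
            # Class centroids from training trials only
            c0 = X[train & (y == 0), :, t].mean(axis=0)
            c1 = X[train & (y == 1), :, t].mean(axis=0)
            # Assign each test trial to the nearer centroid
            d0 = np.linalg.norm(X[test, :, t] - c0, axis=1)
            d1 = np.linalg.norm(X[test, :, t] - c1, axis=1)
            correct += np.sum((d1 < d0) == y[test])
        acc[t] = correct / n_trials
    return acc

acc = decode_timecourse(X, labels)
print(f"pre-stimulus accuracy  ~{acc[:20].mean():.2f}")
print(f"post-onset accuracy    ~{acc[20:].mean():.2f}")
```

Run on these synthetic data, accuracy hovers near chance (0.5) before the injected signal and rises sharply after it, which is the signature the paper reports for position and, from about 135 ms, category decoding.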
doi 10.1167/11.10.9
format Article
pmid 21920851
publisher United States
publication date 2011-09-15
identifier ISSN: 1534-7362
source MEDLINE; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; PubMed Central
subjects Adult
Female
Form Perception - physiology
Humans
Male
Pattern Recognition, Visual - physiology
Photic Stimulation - methods
Visual Cortex - physiology
Visual Fields
Young Adult