Comparison of view-based and reconstruction-based models of human navigational strategy

There is good evidence that simple animals, such as bees, use view-based strategies to return to a familiar location, whereas humans might use a 3-D reconstruction to achieve the same goal. Assuming some noise in the storage and retrieval process, these two types of strategy give rise to different patterns of predicted errors in homing. We describe an experiment that can help distinguish between these models. Participants wore a head-mounted display to carry out a homing task in immersive virtual reality. They viewed three long, thin, vertical poles and had to remember where they were in relation to the poles before being transported (virtually) to a new location in the scene from where they had to walk back to the original location. The experiment was conducted in both a rich-cue scene (a furnished room) and a sparse scene (no background and no floor or ceiling). As one would expect, in a rich-cue environment, the overall error was smaller, and in this case, the ability to separate the models was reduced. However, for the sparse-cue environment, the view-based model outperforms the reconstruction-based model. Specifically, the likelihood of the experimental data is similar to the likelihood of samples drawn from the view-based model (but assessed under both models), and this is not true for samples drawn from the reconstruction-based model.
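The comparison logic in the final sentence of the abstract can be illustrated in code. Below is a minimal sketch (not the authors' implementation): draw synthetic homing endpoints from each candidate model, assess every endpoint set under both models, and check whether the observed data's likelihood pair resembles the view-based samples or the reconstruction-based ones. The Gaussian error models, covariance values, and sample sizes here are invented stand-ins for the paper's actual predicted error distributions.

```python
# Hypothetical sketch of a likelihood-based model comparison, assuming each
# model predicts a 2-D Gaussian distribution of homing endpoints around the
# true target. The specific covariances below are toy values, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(points, mean, cov):
    """Total log-density of 2-D endpoints under a Gaussian error model."""
    diff = points - mean
    inv = np.linalg.inv(cov)
    quad = np.einsum('ij,jk,ik->i', diff, inv, diff)  # per-point quadratic form
    logdet = np.log(np.linalg.det(cov))
    return np.sum(-0.5 * (quad + logdet + 2 * np.log(2 * np.pi)))

# Toy stand-ins: each model predicts a different error pattern around the target.
target = np.zeros(2)
cov_view = np.array([[0.30, 0.10], [0.10, 0.15]])    # hypothetical view-based errors
cov_recon = np.array([[0.10, 0.00], [0.00, 0.40]])   # hypothetical reconstruction errors

# Pretend these are one participant's observed endpoints (here, view-based truth).
data = rng.multivariate_normal(target, cov_view, size=50)

def likelihood_pair(points):
    """Assess one set of endpoints under BOTH models, as the abstract describes."""
    return (log_likelihood(points, target, cov_view),
            log_likelihood(points, target, cov_recon))

# Reference distributions: likelihood pairs for samples drawn from each model.
view_samples = [likelihood_pair(rng.multivariate_normal(target, cov_view, 50))
                for _ in range(200)]
recon_samples = [likelihood_pair(rng.multivariate_normal(target, cov_recon, 50))
                 for _ in range(200)]

obs = likelihood_pair(data)
print("observed (LL_view, LL_recon):", obs)
print("mean over view-based samples: ", np.mean(view_samples, axis=0))
print("mean over recon-based samples:", np.mean(recon_samples, axis=0))
# If the observed pair sits among the view-based reference pairs but not the
# reconstruction-based ones, the data pattern favours the view-based model.
```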

Bibliographic Details
Published in: Journal of vision (Charlottesville, Va.), 2017-08, Vol. 17 (9), p. 11-11
Main Authors: Gootjes-Dreesbach, Luise; Pickup, Lyndsey C; Fitzgibbon, Andrew W; Glennerster, Andrew
Format: Article
Language: English
Subjects: Adult; Environment; Humans; Likelihood Functions; Male; Models, Theoretical; Visual Perception - physiology; Young Adult
Online Access: Full text
DOI: 10.1167/17.9.11
ISSN: 1534-7362
EISSN: 1534-7362
PMID: 28813567
Source: MEDLINE; DOAJ Directory of Open Access Journals; EZB freely available journals; PubMed Central