Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception


Bibliographic Details
Published in: PLoS computational biology, 2018-07, Vol. 14 (7), p. e1006110
Main authors: Acerbi, Luigi; Dokka, Kalpana; Angelaki, Dora E; Ma, Wei Ji
Format: Article
Language: English
Online access: Full text
Description: The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause attribution and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
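The causal inference the description refers to boils down to a Bayesian comparison of two hypotheses: a single common cause for both cues versus two independent causes. As a minimal, hypothetical sketch (not the paper's actual model or code), assuming Gaussian cue noise and a zero-mean Gaussian prior on heading in the style of standard causal-inference models, the posterior probability of a common cause can be computed as:

```python
import math

def posterior_common_cause(x_vis, x_vest, sigma_vis, sigma_vest, sigma_prior, p_common):
    """Posterior probability that two noisy cue measurements share one cause.

    C = 1: both measurements arise from a single heading s, marginalized over
    a zero-mean Gaussian prior N(s; 0, sigma_prior^2).
    C = 2: each measurement has its own independent cause, each drawn from
    the same prior. All parameter names here are illustrative.
    """
    def gauss(x, mu, var):
        # Gaussian density N(x; mu, var)
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    # C = 1 marginal likelihood: integral of the product of the two cue
    # likelihoods and the prior over the shared heading s (closed form).
    var_sum = (sigma_vis**2 * sigma_vest**2
               + sigma_vis**2 * sigma_prior**2
               + sigma_vest**2 * sigma_prior**2)
    like_c1 = math.exp(
        -((x_vis - x_vest) ** 2 * sigma_prior**2
          + x_vis**2 * sigma_vest**2
          + x_vest**2 * sigma_vis**2) / (2 * var_sum)
    ) / (2 * math.pi * math.sqrt(var_sum))

    # C = 2 marginal likelihood: independent causes, each integrated over the prior.
    like_c2 = (gauss(x_vis, 0.0, sigma_vis**2 + sigma_prior**2)
               * gauss(x_vest, 0.0, sigma_vest**2 + sigma_prior**2))

    # Bayes' rule over the two causal structures.
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
```

Two nearly identical cue measurements yield a high posterior probability of a common cause, while widely discrepant ones yield a low probability; a forced-fusion observer, by contrast, would behave as if this posterior were always 1.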
DOI: 10.1371/journal.pcbi.1006110
Publisher: Public Library of Science, United States
Editor: Gershman, Samuel J.
PMID: 30052625
License: © 2018 Acerbi et al.; open access under the Creative Commons Attribution License
ISSN: 1553-734X, 1553-7358
EISSN: 1553-7358
Source: MEDLINE; DOAJ Directory of Open Access Journals; Public Library of Science (PLoS) Journals Open Access; EZB-FREE-00999 freely available EZB journals; PubMed Central
Subjects:
Adult
Algorithms
Bayes Theorem
Bayesian analysis
Biology and Life Sciences
Brain
Brain - physiology
Computational neuroscience
Cues
Discrimination
Discrimination (Psychology)
Female
Funding
Human performance
Humans
Inference
Judgments
Male
Mathematical models
Models, Psychological
Motion Perception
Neurosciences
Observers
Parameter uncertainty
Perception
Perception (Psychology)
Physical Sciences
Physiological aspects
Reproducibility of Results
Research and Analysis Methods
Segregation
Senses
Social Sciences
Software
Task Performance and Analysis
Vestibular system
Vestibule, Labyrinth - physiology
Visual discrimination
Visual observation
Visual Perception
Visual stimuli
Young Adult