No selective integration required: A race model explains responses to audiovisual motion-in-depth
Looming motion is an ecologically salient signal that often signifies danger. In both audition and vision, humans show behavioral biases in response to perceiving looming motion, which is suggested to indicate an adaptation for survival. However, it is an open question whether such biases also occur in the combined processing of multisensory signals. Toward this aim, Cappe, Thut, Romei, and Murray (2009) found that responses to audiovisual signals were faster for congruent looming motion than for receding motion or incongruent combinations. They considered this evidence for selective integration of multisensory looming signals. To test this proposal, we first successfully replicate the behavioral results of Cappe et al. (2009). We then show that the redundant signals effect (RSE, a speedup of multisensory compared to unisensory responses) is not distinct for congruent looming motion. Instead, as predicted by a simple probability summation rule, the RSE is primarily modulated by the looming bias in audition, which suggests that multisensory processing inherits a unisensory effect. Finally, we compare a large set of so-called race models that implement probability summation but allow for interference between auditory and visual processing. The best-fitting model, selected by the Akaike Information Criterion (AIC), explained the RSE across conditions virtually perfectly, with interference parameters that were either constant or varied only with auditory motion. In the absence of effects jointly caused by auditory and visual motion, we conclude that selective integration is not required to explain the behavioral benefits that occur with audiovisual looming motion.
Saved in:
Published in: | Cognition 2022-10, Vol.227, p.105204-105204, Article 105204 |
---|---|
Main authors: | Chua, S.F. Andrew ; Liu, Yue ; Harris, Julie M. ; Otto, Thomas U. |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
creator | Chua, S.F. Andrew ; Liu, Yue ; Harris, Julie M. ; Otto, Thomas U. |
description | Looming motion is an ecologically salient signal that often signifies danger. In both audition and vision, humans show behavioral biases in response to perceiving looming motion, which is suggested to indicate an adaptation for survival. However, it is an open question whether such biases also occur in the combined processing of multisensory signals. Toward this aim, Cappe, Thut, Romei, and Murray (2009) found that responses to audiovisual signals were faster for congruent looming motion than for receding motion or incongruent combinations. They considered this evidence for selective integration of multisensory looming signals. To test this proposal, we first successfully replicate the behavioral results of Cappe et al. (2009). We then show that the redundant signals effect (RSE, a speedup of multisensory compared to unisensory responses) is not distinct for congruent looming motion. Instead, as predicted by a simple probability summation rule, the RSE is primarily modulated by the looming bias in audition, which suggests that multisensory processing inherits a unisensory effect. Finally, we compare a large set of so-called race models that implement probability summation but allow for interference between auditory and visual processing. The best-fitting model, selected by the Akaike Information Criterion (AIC), explained the RSE across conditions virtually perfectly, with interference parameters that were either constant or varied only with auditory motion. In the absence of effects jointly caused by auditory and visual motion, we conclude that selective integration is not required to explain the behavioral benefits that occur with audiovisual looming motion. |
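The probability-summation (race model) logic described in the abstract can be sketched in a few lines: on audiovisual trials, the response is triggered by whichever unisensory process finishes first, so RT_AV = min(RT_A, RT_V), which alone produces a redundant signals effect. The `simulate_race_model` helper and its Gaussian reaction-time parameters below are illustrative assumptions, not fitted values from the paper (whose models additionally include interference parameters between the auditory and visual channels):

```python
import random

def simulate_race_model(n_trials=100_000, seed=1):
    """Monte-Carlo sketch of probability summation (a basic race model).

    On each audiovisual trial the faster of the two unisensory
    processes triggers the response. All parameters are illustrative.
    """
    rng = random.Random(seed)
    # Unisensory reaction times in ms, truncated at a 150 ms floor.
    rt_a = [max(150.0, rng.gauss(450, 60)) for _ in range(n_trials)]  # auditory
    rt_v = [max(150.0, rng.gauss(470, 60)) for _ in range(n_trials)]  # visual
    # Race: on redundant trials, the faster channel wins.
    rt_av = [min(a, v) for a, v in zip(rt_a, rt_v)]
    mean = lambda xs: sum(xs) / len(xs)
    ma, mv, mav = mean(rt_a), mean(rt_v), mean(rt_av)
    # Redundant signals effect: speedup relative to the faster unisensory mean.
    rse = min(ma, mv) - mav
    return ma, mv, mav, rse

ma, mv, mav, rse = simulate_race_model()
print(f"A: {ma:.0f} ms, V: {mv:.0f} ms, AV: {mav:.0f} ms, RSE: {rse:.0f} ms")
```

Because min(a, v) is never larger than either unisensory time on the same trial, the multisensory mean is guaranteed to undercut both unisensory means, yielding a positive RSE without any integration mechanism; this is the baseline against which the paper's interference-augmented race models are compared by AIC.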
doi_str_mv | 10.1016/j.cognition.2022.105204 |
identifier | ISSN: 0010-0277 |
issn | 0010-0277 1873-7838 |
source | ScienceDirect Journals (5 years ago - present) |
subjects | Behavior ; Hearing ; Information processing ; Motion ; Motion detection ; Race ; Response bias ; Sensory integration ; Signal processing ; Visual processing |