How do humans group non‐rigid objects in multiple object tracking?: Evidence from grouping by self‐rotation


Bibliographic Details
Published in: The British journal of psychology, 2022-08, Vol. 113 (3), p. 653-676
Authors: Hu, Luming; Zhao, Chen; Wei, Liuqing; Talhelm, Thomas; Wang, Chundi; Zhang, Xuemin
Format: Article
Language: English
Online access: Full text
Abstract: Previous studies on perceptual grouping found that people can use spatiotemporal and featural information to group spatially separated rigid objects into a unit while tracking moving objects. However, few studies have tested the role of objects’ self‐motion information in perceptual grouping, although it is of great significance to the motion perception in the three‐dimensional space. In natural environments, objects always move in translation and rotation at the same time. The self‐rotation of the objects seriously destroys objects’ rigidity and topology, creates conflicting movement signals and results in crowding effects. Thus, this study sought to examine the specific role played by self‐rotation information on grouping spatially separated non‐rigid objects through a modified multiple object tracking (MOT) paradigm with self‐rotating objects. Experiment 1 found that people could use self‐rotation information to group spatially separated non‐rigid objects, even though this information was deleterious for attentive tracking and irrelevant to the task requirements, and people seemed to use it strategically rather than automatically. Experiment 2 provided stronger evidence that this grouping advantage did come from the self‐rotation per se rather than surface‐level cues arising from self‐rotation (e.g. similar 2D motion signals and common shapes). Experiment 3 changed the stimuli to more natural 3D cubes to strengthen the impression of self‐rotation and again found that self‐rotation improved grouping. Finally, Experiment 4 demonstrated that grouping by self‐rotation and grouping by changing shape were statistically comparable but additive, suggesting that they were two different sources of the object information. Thus, grouping by self‐rotation mainly benefited from the perceptual differences in motion flow fields rather than in deformation.
Overall, this study is the first attempt to identify self‐motion as a new feature that people can use to group objects in dynamic scenes and shed light on debates about what entities/units we group and what kinds of information about a target we process while tracking objects.
DOI: 10.1111/bjop.12547
Publisher: British Psychological Society, England
PMID: 34921401
ISSN: 0007-1269
EISSN: 2044-8295
Record ID: cdi_proquest_miscellaneous_2611652044
Source: Wiley Online Library - AutoHoldings Journals; EBSCOhost Business Source Complete; Applied Social Sciences Index & Abstracts (ASSIA)
Subjects:
additivity
common fate
Crowding
Cues
Experiments
grouping
Motion
Moving objects
multiple object tracking
Natural environment
non‐rigid
Perceptual grouping
Rotation
self‐rotation
Tracking
Translation