Vision-based collective motion: A locust-inspired reductionist model

Naturally occurring collective motion is a fascinating phenomenon in which swarming individuals aggregate and coordinate their motion. Many theoretical models of swarming assume idealized, perfect perceptual capabilities, and ignore the underlying perception processes, particularly for agents relying on visual perception.

Detailed description

Saved in:
Bibliographic details
Published in: PLoS computational biology 2024-01, Vol. 20 (1), p. e1011796
Main authors: Krongauz, David L; Ayali, Amir; Kaminka, Gal A
Format: Article
Language: English
Subjects:
Online access: Full text
Description: Naturally occurring collective motion is a fascinating phenomenon in which swarming individuals aggregate and coordinate their motion. Many theoretical models of swarming assume idealized, perfect perceptual capabilities, and ignore the underlying perception processes, particularly for agents relying on visual perception. Specifically, biological vision in many swarming animals, such as locusts, utilizes monocular non-stereoscopic vision, which prevents perfect acquisition of distances and velocities. Moreover, swarming peers can visually occlude each other, further introducing estimation errors. In this study, we explore necessary conditions for the emergence of ordered collective motion under restricted conditions, using non-stereoscopic, monocular vision. We present a model of vision-based collective motion for locust-like agents: elongated shape, omni-directional visual sensor parallel to the horizontal plane, and lacking stereoscopic depth perception. The model addresses (i) the non-stereoscopic estimation of distance and velocity, (ii) the presence of occlusions in the visual field. We consider and compare three strategies that an agent may use to interpret partially-occluded visual information at the cost of the computational complexity required for the visual perception processes. Computer-simulated experiments conducted in various geometrical environments (toroidal, corridor, and ring-shaped arenas) demonstrate that the models can result in an ordered or near-ordered state. At the same time, they differ in the rate at which order is achieved. Moreover, the results are sensitive to the elongation of the agents. Experiments in geometrically constrained environments reveal differences between the models and elucidate possible tradeoffs in using them to control swarming agents. These suggest avenues for further study in biology and robotics.
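The record does not reproduce the paper's equations, but two of the abstract's key ideas are standard in this literature and can be sketched briefly: ordered collective motion is commonly quantified by a polarization order parameter (the magnitude of the mean heading vector), and a peer's known body length subtending a visual angle is the classic monocular depth cue for the "non-stereoscopic estimation of distance" the abstract mentions. The sketch below is illustrative only; the function names and the specific estimator are assumptions, not the authors' actual implementation.

```python
import math

def polarization(headings):
    """Polarization order parameter for a swarm.

    headings: list of (vx, vy) unit heading vectors, one per agent.
    Returns 1.0 for a fully aligned swarm, near 0.0 for a disordered one.
    """
    n = len(headings)
    sx = sum(v[0] for v in headings)
    sy = sum(v[1] for v in headings)
    return math.hypot(sx, sy) / n

def distance_from_angular_size(body_length, subtended_angle):
    """Monocular depth cue: recover distance to a peer of known body length
    from the visual angle it subtends: distance = L / (2 * tan(theta / 2))."""
    return body_length / (2.0 * math.tan(subtended_angle / 2.0))

# Fully aligned swarm vs. two perfectly opposed agents.
aligned = [(1.0, 0.0)] * 100
opposed = [(1.0, 0.0), (-1.0, 0.0)]
print(polarization(aligned))   # 1.0
print(polarization(opposed))   # 0.0

# A 5 cm peer at 1 m subtends 2*atan(0.025) rad; the estimator inverts this.
theta = 2.0 * math.atan(0.025)
print(round(distance_from_angular_size(0.05, theta), 6))  # 1.0
```

Note that the angular-size cue is only exact when the peer's true body length is known and unoccluded, which is precisely the assumption the occlusion-handling strategies in the paper must relax.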
DOI: 10.1371/journal.pcbi.1011796
PMID: 38285716
Editor: Martinez-Garcia, Ricardo
Publisher: Public Library of Science (United States)
Rights: © 2024 Krongauz et al. Open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
ISSN: 1553-734X
eISSN: 1553-7358
Source: Public Library of Science (PLoS) Journals Open Access; MEDLINE; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals; PubMed Central
Subjects:
Animals
Biology and Life Sciences
Computer Simulation
Control algorithms
Decision making
Depth Perception
Engineering and Technology
Grasshoppers
Herding behavior in animals
Humans
Locusts
Machine vision
Models, Theoretical
Monocular vision
Morphology
Motion
Motion Perception
Perception
Physical Sciences
Robotics
Sensors
Social Sciences
Space perception
Stereoscopic vision
Swarming
Velocity
Vision, Ocular
Visual field
Visual fields
Visual perception