Wide-Area Crowd Counting: Multi-view Fusion Networks for Counting in Large Scenes
Crowd counting in single-view images has achieved outstanding performance on existing counting datasets. However, single-view counting is not applicable to large and wide scenes (e.g., public parks, long subway platforms, or event spaces) because a single camera cannot capture the whole scene in adequate detail for counting, e.g., when the scene is too large to fit into the field-of-view of the camera, too long so that the resolution is too low on faraway crowds, or when there are too many large objects that occlude large portions of the crowd. Therefore, solving the wide-area counting task requires multiple cameras with overlapping fields-of-view. In this paper, we propose a deep neural network framework for multi-view crowd counting, which fuses information from multiple camera views to predict a scene-level density map on the ground-plane of the 3D world. We consider three versions of the fusion framework: the late fusion model fuses camera-view density maps; the naïve early fusion model fuses camera-view feature maps; and the multi-view multi-scale early fusion model ensures that features aligned to the same ground-plane point have consistent scales. A rotation selection module further ensures consistent rotation alignment of the features. We test our 3 fusion models on 3 multi-view counting datasets, PETS2009, DukeMTMC, and a newly collected multi-view counting dataset containing a crowded street intersection. Our methods achieve state-of-the-art results compared to other multi-view counting baselines.
Published in: | International journal of computer vision 2022-08, Vol.130 (8), p.1938-1960 |
---|---|
Main authors: | Zhang, Qi ; Chan, Antoni B. |
Format: | Article |
Language: | eng |
Subjects: | Artificial Intelligence ; Artificial neural networks ; Cameras ; Computer Imaging ; Computer Science ; Datasets ; Density ; Feature maps ; Ground plane ; Image Processing and Computer Vision ; Neural networks ; Pattern Recognition ; Pattern Recognition and Graphics ; Rotation ; Vision |
Online access: | Full text |
container_end_page | 1960 |
---|---|
container_issue | 8 |
container_start_page | 1938 |
container_title | International journal of computer vision |
container_volume | 130 |
creator | Zhang, Qi ; Chan, Antoni B. |
description | Crowd counting in single-view images has achieved outstanding performance on existing counting datasets. However, single-view counting is not applicable to large and wide scenes (e.g., public parks, long subway platforms, or event spaces) because a single camera cannot capture the whole scene in adequate detail for counting, e.g., when the scene is too large to fit into the field-of-view of the camera, too long so that the resolution is too low on faraway crowds, or when there are too many large objects that occlude large portions of the crowd. Therefore, solving the wide-area counting task requires multiple cameras with overlapping fields-of-view. In this paper, we propose a deep neural network framework for multi-view crowd counting, which fuses information from multiple camera views to predict a scene-level density map on the ground-plane of the 3D world. We consider three versions of the fusion framework: the late fusion model fuses camera-view density maps; the naïve early fusion model fuses camera-view feature maps; and the multi-view multi-scale early fusion model ensures that features aligned to the same ground-plane point have consistent scales. A rotation selection module further ensures consistent rotation alignment of the features. We test our 3 fusion models on 3 multi-view counting datasets, PETS2009, DukeMTMC, and a newly collected multi-view counting dataset containing a crowded street intersection. Our methods achieve state-of-the-art results compared to other multi-view counting baselines. |
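The late-fusion variant in the description can be illustrated with a minimal NumPy sketch: each camera-view density map is inverse-warped onto a common ground-plane grid via a homography, and the warped maps are combined. The function names, the nearest-neighbour sampling, and the simple mean in place of the paper's learned fusion network are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def warp_to_ground(density, H_inv, out_shape):
    """Inverse-warp a camera-view density map onto a ground-plane grid.

    density: (h, w) camera-view density map
    H_inv: 3x3 homography mapping ground-plane (x, y, 1) to image coords
    out_shape: (gh, gw) size of the ground-plane grid
    """
    gh, gw = out_shape
    ys, xs = np.mgrid[0:gh, 0:gw]
    # Homogeneous ground-plane coordinates, one column per grid cell.
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(gh * gw)])
    img = H_inv @ pts
    u = img[0] / img[2]
    v = img[1] / img[2]
    # Nearest-neighbour sampling; ground points outside the view stay zero.
    ui = np.round(u).astype(int)
    vi = np.round(v).astype(int)
    ok = (ui >= 0) & (ui < density.shape[1]) & (vi >= 0) & (vi < density.shape[0])
    out = np.zeros(out_shape)
    out.ravel()[ok] = density[vi[ok], ui[ok]]
    return out

def late_fusion(densities, H_invs, out_shape):
    """Project every view to the ground plane and average the results
    (a stand-in for the fusion CNN that predicts the scene-level map)."""
    warped = [warp_to_ground(d, H, out_shape) for d, H in zip(densities, H_invs)]
    return np.mean(warped, axis=0)
```

With a single view and an identity homography the density map passes through unchanged; with overlapping views, the averaging avoids double-counting people visible in several cameras, which is one motivation for learning the fusion step instead of summing.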
doi_str_mv | 10.1007/s11263-022-01626-4 |
format | Article |
publisher | New York: Springer US |
fulltext | fulltext |
identifier | ISSN: 0920-5691 |
ispartof | International journal of computer vision, 2022-08, Vol.130 (8), p.1938-1960 |
issn | 0920-5691 (print) ; 1573-1405 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2689410642 |
source | SpringerNature Journals |
subjects | Artificial Intelligence ; Artificial neural networks ; Cameras ; Computer Imaging ; Computer Science ; Datasets ; Density ; Feature maps ; Ground plane ; Image Processing and Computer Vision ; Neural networks ; Pattern Recognition ; Pattern Recognition and Graphics ; Rotation ; Vision |
title | Wide-Area Crowd Counting: Multi-view Fusion Networks for Counting in Large Scenes |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-01T22%3A08%3A29IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Wide-Area%20Crowd%20Counting:%20Multi-view%20Fusion%20Networks%20for%20Counting%20in%20Large%20Scenes&rft.jtitle=International%20journal%20of%20computer%20vision&rft.au=Zhang,%20Qi&rft.date=2022-08-01&rft.volume=130&rft.issue=8&rft.spage=1938&rft.epage=1960&rft.pages=1938-1960&rft.issn=0920-5691&rft.eissn=1573-1405&rft_id=info:doi/10.1007/s11263-022-01626-4&rft_dat=%3Cgale_proqu%3EA710296165%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2689410642&rft_id=info:pmid/&rft_galeid=A710296165&rfr_iscdi=true |