Toward Accurate Person-level Action Recognition in Videos of Crowded Scenes

Detecting and recognizing human actions in videos of crowded scenes is a challenging problem due to complex environments and diverse events. Prior works often fall short in two respects: (1) they do not exploit information about the scene; (2) they lack training data for crowded and complex scenes. In this paper, we focus on improving spatio-temporal action recognition by fully utilizing scene information and collecting new data. A top-down strategy is used to overcome these limitations. Specifically, we adopt a strong human detector to localize each person in every frame. We then apply action recognition models to learn spatio-temporal information from video frames, training on both the HIE dataset and new data with diverse scenes collected from the internet, which improves the generalization ability of our model. In addition, scene information is extracted by a semantic segmentation model to assist the process. As a result, our method achieved an average 26.05 wf_mAP (ranking 1st place in the ACM MM Grand Challenge 2020: Human in Events).
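To make the top-down strategy concrete, the sketch below shows the shape of such a detect-then-recognize pipeline. It is an illustration only, not the authors' code: PersonDetector, SceneSegmenter, ActionRecognizer, and recognize_actions are hypothetical placeholders for a strong human detector, a semantic segmentation model, and a spatio-temporal action classifier.

from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class Detection:
    """Bounding box (x1, y1, x2, y2) and confidence for one person in one frame."""
    box: tuple
    score: float

class PersonDetector:
    """Placeholder for a strong per-frame human detector (hypothetical interface)."""
    def detect(self, frame: np.ndarray) -> List[Detection]:
        raise NotImplementedError

class SceneSegmenter:
    """Placeholder for a semantic segmentation model supplying scene context."""
    def segment(self, frame: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class ActionRecognizer:
    """Placeholder for a spatio-temporal action recognition model."""
    def classify(self, person_clip: np.ndarray, scene_context: np.ndarray) -> dict:
        raise NotImplementedError

def crop_tube(frames: List[np.ndarray], box: tuple) -> np.ndarray:
    """Crop the same box from every frame to form a person-centric clip."""
    x1, y1, x2, y2 = (int(v) for v in box)
    return np.stack([f[y1:y2, x1:x2] for f in frames])

def recognize_actions(frames, detector, segmenter, recognizer, score_thresh=0.5):
    """Top-down pipeline: detect people on a key frame, crop person tubes,
    then classify each tube together with scene context from segmentation."""
    key_frame = frames[len(frames) // 2]
    scene = segmenter.segment(key_frame)  # scene labels give context the crops lose
    results = []
    for det in detector.detect(key_frame):
        if det.score < score_thresh:
            continue  # discard low-confidence people, common in dense crowds
        tube = crop_tube(frames, det.box)
        results.append((det.box, recognizer.classify(tube, scene)))
    return results

The point of this decoupled design is that the detector decides where each person is, so the recognizer can focus on what that person is doing, while segmentation restores the surrounding scene information that per-person crops would otherwise discard.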

Bibliographic Details
Published in: arXiv.org, 2020-10
Main authors: Li, Yuan; Zhou, Yichen; Chang, Shuning; Huang, Ziyuan; Chen, Yunpeng; Nie, Xuecheng; Wang, Tao; Feng, Jiashi; Yan, Shuicheng
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Recognition; Semantic segmentation; Video
Online access: Full text
DOI: 10.48550/arXiv.2010.08365
EISSN: 2331-8422