Point3D: tracking actions as moving points with 3D CNNs

Spatio-temporal action recognition has been a challenging task that involves detecting where and when actions occur. Current state-of-the-art action detectors are mostly anchor-based, requiring sensitive anchor designs and huge computations due to calculating large numbers of anchor boxes. Motivated...

Bibliographic Details
Main Authors: Mo, Shentong; Xia, Jingfei; Tan, Xiaoqing; Raj, Bhiksha
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
creator Mo, Shentong; Xia, Jingfei; Tan, Xiaoqing; Raj, Bhiksha
description Spatio-temporal action recognition has been a challenging task that involves detecting where and when actions occur. Current state-of-the-art action detectors are mostly anchor-based, requiring sensitive anchor designs and huge computations due to calculating large numbers of anchor boxes. Motivated by nascent anchor-free approaches, we propose Point3D, a flexible and computationally efficient network with high precision for spatio-temporal action recognition. Our Point3D consists of a Point Head for action localization and a 3D Head for action classification. Firstly, Point Head is used to track center points and knot key points of humans to localize the bounding box of an action. These location features are then piped into a time-wise attention to learn long-range dependencies across frames. The 3D Head is later deployed for the final action classification. Our Point3D achieves state-of-the-art performance on the JHMDB, UCF101-24, and AVA benchmarks in terms of frame-mAP and video-mAP. Comprehensive ablation studies also demonstrate the effectiveness of each module proposed in our Point3D.
format Article
creationdate 2022-03-20
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2203.10584
language eng
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Point3D: tracking actions as moving points with 3D CNNs
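
The description above outlines a three-stage pipeline: a Point Head that tracks human center points and key points to localize the action bounding box, a time-wise attention over the per-frame location features to capture long-range dependencies, and a 3D Head for the final action classification. The PyTorch sketch below shows one way such a pipeline could be wired; all module names, layer sizes, and the way location and clip features are fused (PointHead, TimeWiseAttention, Point3DSketch, head3d, and so on) are illustrative assumptions, not the authors' implementation, and bounding-box regression and the frame-mAP/video-mAP evaluation are omitted.

```python
import torch
import torch.nn as nn


class PointHead(nn.Module):
    """Per-frame 2D head predicting one center-point heatmap plus K key-point heatmaps."""

    def __init__(self, in_channels: int = 3, feat_channels: int = 64, num_keypoints: int = 17):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1 center heatmap + K key-point heatmaps per frame.
        self.heatmaps = nn.Conv2d(feat_channels, 1 + num_keypoints, kernel_size=1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B*T, C, H, W) -> heatmaps: (B*T, 1+K, H/4, W/4)
        return self.heatmaps(self.backbone(frames))


class TimeWiseAttention(nn.Module):
    """Self-attention across frames to model long-range temporal dependencies."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, dim) -> attended features of the same shape
        out, _ = self.attn(x, x, x)
        return out


class Point3DSketch(nn.Module):
    """Illustrative wiring: Point Head -> time-wise attention -> 3D CNN head -> classifier."""

    def __init__(self, num_classes: int = 24, num_keypoints: int = 17, dim: int = 128):
        super().__init__()
        self.point_head = PointHead(num_keypoints=num_keypoints)
        # Pool each frame's heatmaps into a fixed-size "location" feature vector.
        self.pool = nn.AdaptiveAvgPool2d(8)
        self.proj = nn.Linear((1 + num_keypoints) * 8 * 8, dim)
        self.time_attn = TimeWiseAttention(dim)
        # "3D Head": a small 3D CNN over the raw clip, providing clip-level features.
        self.head3d = nn.Sequential(
            nn.Conv3d(3, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(dim, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (B, T, C, H, W)
        b, t, c, h, w = clip.shape
        heatmaps = self.point_head(clip.reshape(b * t, c, h, w))          # (B*T, 1+K, h', w')
        loc = self.proj(self.pool(heatmaps).flatten(1)).reshape(b, t, -1)  # (B, T, dim)
        loc = self.time_attn(loc).mean(dim=1)                              # (B, dim) after temporal pooling
        vid = self.head3d(clip.permute(0, 2, 1, 3, 4)).flatten(1)          # (B, dim) clip-level feature
        return self.classifier(torch.cat([loc, vid], dim=1))               # (B, num_classes)


if __name__ == "__main__":
    model = Point3DSketch(num_classes=24)
    logits = model(torch.randn(2, 8, 3, 112, 112))  # two clips of eight 112x112 RGB frames
    print(logits.shape)  # torch.Size([2, 24])
```

In the paper, the tracked center and key points localize action bounding boxes; the abstract does not say how those boxes feed the 3D Head, so this sketch simply concatenates the attention-pooled location features with a clip-level 3D CNN feature before classification.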