CuDi3D: Curvilinear displacement based approach for online 3D action detection


Bibliographic details
Published in: Computer vision and image understanding, 2018-09, Vol. 174, p. 57-69
Main authors: Boulahia, Said Yacine; Anquetil, Eric; Multon, Franck; Kulpa, Richard
Format: Article
Language: English
Online access: Full text

Description:

•A step-by-step approach that addresses the OAD problem in a transparent manner.
•A novel and robust online segmentation paradigm for skeleton-based action detection.
•Addressing temporal variability with curvilinear windows.
•Addressing inter-class spatial variability using multiple classifiers.
•Addressing intra-class spatial variability with a decision fusion system.
•Better experimental results reported on four skeleton-based benchmarks.

Being able to interactively detect and recognize 3D actions based on skeleton data, in unsegmented streams, has become an important computer vision topic. It raises three scientific problems in relation with variability. The first one is the temporal variability that occurs when subjects perform gestures with different speeds. The second one is the inter-class spatial variability, which refers to disparities between the displacement amounts induced by different classes (i.e. long vs. short movements). The last one is the intra-class spatial variability caused by differences in style and gesture amplitude. In this paper, we design an original approach that better considers these three issues. To address temporal variability we introduce the notion of curvilinear segmentation. It consists in extracting features, not on temporally-based sliding windows, but on trajectory segments for which the cumulated displacement equals a class-based amount. Second, to tackle inter-class spatial variability, we define several competing classifiers with their dedicated curvilinear windows. Last, we address intra-class spatial variability by designing a fusion system that takes the decisions and confidence scores of every competing classifier into account. Extensive experiments on four challenging skeleton-based datasets demonstrate the relevance of the proposed approach for action recognition and online action detection.
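
To make the core mechanism concrete, below is a minimal sketch, not the authors' implementation: it accumulates frame-to-frame joint displacement over a skeleton stream, cuts a trailing "curvilinear" window once a class-dependent displacement budget is reached, and stands in for the fusion step with a trivial most-confident-vote rule. The function names, the (T, J, 3) frame layout, the displacement_budget value and the fusion rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cumulative_displacement(frames):
    # frames: (T, J, 3) array of J joint positions per skeleton frame.
    # Returns an array of length T: cumulated joint displacement up to each frame.
    steps = np.linalg.norm(np.diff(frames, axis=0), axis=2).sum(axis=1)  # (T-1,)
    return np.concatenate(([0.0], np.cumsum(steps)))

def curvilinear_window(frames, displacement_budget):
    # Trailing segment whose cumulated displacement just reaches the
    # class-dependent budget: slow executions yield longer windows and
    # fast executions shorter ones, which absorbs temporal variability.
    cum = cumulative_displacement(frames)
    travelled_to_end = cum[-1] - cum                     # displacement from frame i to the last frame
    candidates = np.where(travelled_to_end <= displacement_budget)[0]
    start = int(candidates[0]) if candidates.size else 0
    return frames[start:]

def fuse(votes):
    # Placeholder decision fusion: keep the most confident (label, score) vote
    # among the competing per-class classifiers; the paper's fusion system
    # combines decisions and confidence scores in a richer way.
    return max(votes, key=lambda v: v[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic skeleton stream: 300 frames, 20 joints, smooth random motion.
    stream = np.cumsum(rng.normal(scale=0.01, size=(300, 20, 3)), axis=0)
    window = curvilinear_window(stream, displacement_budget=5.0)
    print(window.shape)                  # window length depends on how fast the joints moved
    print(fuse([("wave", 0.7), ("punch", 0.4)]))
```

In the paper, each action class has its own displacement amount and its own classifier, so at every new frame several such windows are extracted and classified in parallel before fusion.
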
DOI: 10.1016/j.cviu.2018.07.003
ISSN: 1077-3142
EISSN: 1090-235X
Source: Access via ScienceDirect (Elsevier)
Subjects:
Computer Science
Computer Vision and Pattern Recognition
Curvilinear displacement
Graphics
Human action detection
Human-Computer Interaction
Machine Learning
Multimedia
Online action recognition
Online segmentation
Skeleton data stream
Skeleton-based approach