Temporal-Based Action Clustering for Motion Tendencies

Video-based action recognition encompasses the recognition of appearance and the classification of action types. This work proposes a discrete-temporal-sequence-based motion tendency clustering framework to implement motion clustering by extracting motion tendencies and self-supervised learning. A published traffic intersection dataset (inD) and a self-produced gesture video set are used for evaluation and to validate the motion tendency action recognition hypothesis.
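The record describes the framework only at a high level. As a rough illustration of the general idea, and not the authors' method, the sketch below clusters discrete 2-D trajectories (such as tracks from a traffic-intersection dataset) by their motion tendency, using hand-crafted unit-displacement descriptors and plain k-means. Every function name, parameter, and the use of k-means here is an illustrative assumption; the paper itself relies on a self-supervised learning framework rather than a hand-crafted descriptor.

```python
# Minimal sketch (assumed, not the paper's implementation): cluster discrete
# temporal position sequences by their motion tendency.
import numpy as np
from sklearn.cluster import KMeans


def tendency_descriptor(traj, n_steps=8):
    """Summarize a (T, 2) trajectory as n_steps unit displacement vectors."""
    # Resample the track to n_steps + 1 points along its index.
    idx = np.linspace(0, len(traj) - 1, n_steps + 1)
    pts = np.stack(
        [np.interp(idx, np.arange(len(traj)), traj[:, d]) for d in range(2)], axis=1
    )
    disp = np.diff(pts, axis=0)                    # per-segment displacement
    norms = np.linalg.norm(disp, axis=1, keepdims=True)
    disp = disp / np.maximum(norms, 1e-8)          # keep direction only (the "tendency")
    return disp.ravel()


def cluster_tendencies(trajs, n_clusters=4):
    """Group trajectories by motion tendency with plain k-means (assumed stand-in)."""
    X = np.stack([tendency_descriptor(t) for t in trajs])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: noisy "eastbound" vs. "northbound" tracks, 30 time steps each.
    east = [np.cumsum(rng.normal([1.0, 0.0], 0.1, (30, 2)), axis=0) for _ in range(5)]
    north = [np.cumsum(rng.normal([0.0, 1.0], 0.1, (30, 2)), axis=0) for _ in range(5)]
    print(cluster_tendencies(east + north, n_clusters=2))
```

On real data such as inD tracks, the toy inputs would be replaced by per-object position sequences; the paper's framework additionally learns the tendency representation through self-supervision instead of hand-crafting it as done above.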

Bibliographic Details
Published in: IEICE Transactions on Information and Systems, 2023/08/01, Vol. E106.D(8), pp. 1292-1295
Main Authors: QIAN, Xingyu; CHEN, Xiaogang; YUEMAIER, Aximu; LI, Shunfen; DAI, Weibang; SONG, Zhitang
Format: Article
Language: English
Subjects: action clustering; Activity recognition; Clustering; continuous learning; Machine learning; motion-tendency; Self-supervised learning; self-supervision
Online Access: Full text
DOI: 10.1587/transinf.2023EDL8001
ISSN: 0916-8532
EISSN: 1745-1361
Source: J-STAGE Free; Elektronische Zeitschriftenbibliothek (freely accessible e-journals)