Interaction behavior recognition from multiple views

This paper proposes a novel multi-view interactive behavior recognition method based on local self-similarity descriptors and graph shared multi-task learning. First, a composite interactive feature representation is proposed that encodes both the spatial distribution of the local motion of interest points and their contexts. Furthermore, a local self-similarity descriptor represented by a temporal-pyramid bag of words (BOW) is applied to reduce the influence of changes in observation angle on recognition while retaining temporal information. To explore the latent correlation between interactive behaviors observed from different views while retaining the specific information of each behavior, graph shared multi-task learning is used to learn the corresponding interactive behavior recognition model. Experimental results show the effectiveness of the proposed method in comparison with other state-of-the-art methods on the public CASIA database, the i3Dpose dataset, and a self-built interactive behavior database.
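The record contains no code from the paper. As a rough, non-authoritative sketch of the temporal-pyramid bag-of-words (BOW) pooling step mentioned in the abstract, the following Python snippet (hypothetical function and variable names, NumPy only) assumes that local self-similarity descriptors have already been extracted per frame and that a visual codebook has been learned beforehand, e.g., by k-means; it then concatenates L1-normalized word histograms computed over the whole clip, its halves, and its quarters.

import numpy as np

def temporal_pyramid_bow(descriptors, codebook, levels=3):
    # Hypothetical sketch: quantize per-frame local self-similarity descriptors
    # against a codebook and pool them into a temporal-pyramid BOW vector.
    # descriptors: (T, D) array, one local descriptor per frame (assumed).
    # codebook:    (K, D) array of visual words learned beforehand.
    T, _ = descriptors.shape
    K = codebook.shape[0]
    # Assign each descriptor to its nearest visual word (Euclidean distance).
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)              # (T,) word index per frame

    feature = []
    for level in range(levels):
        n_segments = 2 ** level                   # 1, 2, 4, ... temporal segments
        bounds = np.linspace(0, T, n_segments + 1, dtype=int)
        for s in range(n_segments):
            seg = words[bounds[s]:bounds[s + 1]]
            hist = np.bincount(seg, minlength=K).astype(float)
            if hist.sum() > 0:
                hist /= hist.sum()                # L1-normalize each segment histogram
            feature.append(hist)
    return np.concatenate(feature)                # length K * (2**levels - 1)

# Toy usage with random data, purely illustrative.
rng = np.random.default_rng(0)
desc = rng.normal(size=(40, 64))                  # 40 frames, 64-D descriptors
cb = rng.normal(size=(100, 64))                   # 100 visual words
vec = temporal_pyramid_bow(desc, cb)
print(vec.shape)                                  # (700,) for K=100 and 3 levels

Nearest-word assignment and per-segment normalization are common BOW pooling choices; the paper may use a different codebook size, pyramid depth, or assignment scheme.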


Bibliographic Details
Published in: Journal of Central South University, 2020, Vol. 27 (1), p. 101-113
Main authors: Xia, Li-min; Guo, Wei-ting; Wang, Hao
Format: Article
Language: English
Subjects: Behavior; Change detection; Engineering; Graphical representations; Learning; Metallic Materials; Recognition; Self-similarity; Spatial distribution
Online access: Full text
DOI: 10.1007/s11771-020-4281-6
Publisher: Central South University, Changsha
ISSN: 2095-2899
EISSN: 2227-5223