Motion pattern based representation for improving human action retrieval



Bibliographic details
Published in: Multimedia tools and applications, 2018-10, Vol. 77 (19), p. 26009-26032
Authors: Ramezani, Mohsen; Yaghmaee, Farzin
Format: Article
Language: English
Subjects:
Online access: Full text
description In recent years, a large number of videos have been shared on the Internet. Finding desired videos among this large collection of video files is a major challenge in big data. Due to various problems with tag-based search methods, content-based search has been introduced as an alternative way of finding desired videos. Moreover, many videos are related to humans, and human actions are considered the basic content for searching such videos; this task is called human action retrieval. Action retrieval methods use features (global or local) to represent actions. Because local features are more robust to noise, they are used to represent human actions. In this paper, we represent human actions by the patterns of body motions. Focusing on patterns can capture more accurate information about the actions. To this end, the complexity of the pattern of body motions at different scales of the coordinate axes is calculated to describe the body motion and hence the human action. The complexity of motion patterns is calculated by drawing inspiration from fractal dimension calculation. Our method can discriminate different actions that have similar movements more accurately than state-of-the-art methods. Experimental results on the KTH, UCFYT, and HMDB datasets show the better performance of our method compared with state-of-the-art methods.
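The abstract describes measuring the complexity of motion patterns at different scales of the coordinate axes, inspired by fractal dimension calculation. The record does not give the paper's actual algorithm, so as a minimal illustrative sketch only, a standard box-counting fractal dimension estimate for a 2-D motion trajectory might look like the following; the function name, the choice of scales, and the example trajectory are all hypothetical:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting (fractal) dimension of a 2-D point set.

    points: (N, 2) array of trajectory coordinates (e.g. a body part's motion path).
    scales: iterable of box side lengths, as fractions of the trajectory's extent.
    Returns the slope of log(box count) versus log(1/scale).
    """
    points = np.asarray(points, dtype=float)
    # Normalize coordinates into the unit square so box sizes are comparable.
    mins = points.min(axis=0)
    span = points.max(axis=0) - mins
    span[span == 0] = 1.0
    unit = (points - mins) / span

    counts = []
    for s in scales:
        # Count distinct boxes of side s occupied by at least one point.
        occupied = np.unique(np.floor(unit / s), axis=0)
        counts.append(len(occupied))

    # The dimension estimate is the slope of log N(s) against log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# Sanity check: a straight-line trajectory should have dimension close to 1.
t = np.linspace(0, 1, 2000)
line = np.column_stack([t, t])
print(box_counting_dimension(line, scales=[1/4, 1/8, 1/16, 1/32]))
```

Evaluating the slope over several scales, rather than a single one, is what makes the measure multi-scale; for a smooth trajectory the estimate approaches 1, while a more erratic motion path yields a higher value.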
doi_str_mv 10.1007/s11042-018-5835-6
issn 1380-7501
eissn 1573-7721
source SpringerLink Journals - AutoHoldings
subjects Complexity
Computer Communication Networks
Computer Science
Data management
Data Structures and Information Theory
Human motion
Mathematical analysis
Multimedia Information Systems
Retrieval
Special Purpose and Application-Based Systems
State of the art