Spatial-temporal motion information integration for action detection and recognition in non-static background
Saved in:
Main Authors: | Dianting Liu, Mei-Ling Shyu, Guiru Zhao |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Action Detection; Action Recognition; Cameras; Detectors; Estimation; Feature extraction; Gaussian Mixture Models (GMM); GMM Supervector; Optical imaging; Spatio-temporal Motion Information Integration; Universal Background Model (UBM); Vectors; Video sequences |
Online Access: | Order full text |
container_end_page | 633 |
---|---|
container_issue | |
container_start_page | 626 |
container_title | 2013 IEEE 14th International Conference on Information Reuse & Integration (IRI) |
container_volume | |
creator | Dianting Liu; Mei-Ling Shyu; Guiru Zhao |
description | Various motion detection methods have been proposed in the past decade, but there have been few attempts to investigate the advantages and disadvantages of the different detection mechanisms so that they can complement each other for better performance. To address this demand, this paper proposes a human action detection and recognition framework that bridges the semantic gap between low-level pixel intensity changes and a high-level understanding of the meaning of an action. To robustly estimate the region of action against the complexities of an uncontrolled background, we propose combining the optical flow field with the Harris3D corner detector to obtain a new spatial-temporal estimate in the video sequences. The detection method, which considers the integrated motion information, works well with dynamic backgrounds and camera motion, and demonstrates the advantage of integrating multiple spatial-temporal cues. The local features (SIFT and STIP) extracted from the estimated region of action are then used to learn a Universal Background Model (UBM) for the action recognition task. Experimental results on the KTH and UCF YouTube Action (UCF11) data sets show that the proposed framework not only estimates the region of action better but also achieves higher recognition accuracy than peer work. (Illustrative sketches of the detection and recognition steps follow this record.) |
doi_str_mv | 10.1109/IRI.2013.6642527 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | EISBN: 9781479910502 |
ispartof | 2013 IEEE 14th International Conference on Information Reuse & Integration (IRI), 2013, p.626-633 |
issn | |
language | eng |
recordid | cdi_ieee_primary_6642527 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Action Detection; Action Recognition; Cameras; Detectors; Estimation; Feature extraction; Gaussian Mixture Models (GMM); GMM Supervector; Optical imaging; Spatio-temporal Motion Information Integration; Universal Background Model (UBM); Vectors; Video sequences |
title | Spatial-temporal motion information integration for action detection and recognition in non-static background |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-14T23%3A35%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Spatial-temporal%20motion%20information%20integration%20for%20action%20detection%20and%20recognition%20in%20non-static%20background&rft.btitle=2013%20IEEE%2014th%20International%20Conference%20on%20Information%20Reuse%20&%20Integration%20(IRI)&rft.au=Dianting%20Liu&rft.date=2013-08&rft.spage=626&rft.epage=633&rft.pages=626-633&rft_id=info:doi/10.1109/IRI.2013.6642527&rft_dat=%3Cieee_6IE%3E6642527%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9781479910502&rft.eisbn_list=1479910503&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=6642527&rfr_iscdi=true |
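The description above outlines the detection step as the integration of two spatial-temporal cues: a dense optical flow field supplies motion evidence, while Harris3D interest points supply space-time structure, and their combination localizes the region of action despite dynamic backgrounds and camera motion. The sketch below illustrates that idea only and is not the authors' implementation: it substitutes OpenCV's per-frame Harris response for a true Harris3D detector, and `flow_thresh` and `corner_q` are illustrative parameters, not values from the paper.

```python
# Illustrative sketch: integrate optical-flow motion evidence with a corner
# cue to estimate a bounding region of action. Per-frame Harris corners stand
# in for the Harris3D detector; thresholds are assumptions, not paper values.
import cv2
import numpy as np

def estimate_action_region(prev_gray, curr_gray, flow_thresh=1.0, corner_q=0.999):
    # Dense Farneback optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude
    motion_mask = mag > flow_thresh         # pixels with significant motion

    # Harris corner response on the current frame (stand-in for Harris3D).
    harris = cv2.cornerHarris(np.float32(curr_gray), 2, 3, 0.04)
    corner_mask = harris > np.quantile(harris, corner_q)

    # Integrate the two cues: keep pixels that are both moving and structured,
    # which suppresses textureless camera-motion background and static clutter.
    ys, xs = np.nonzero(motion_mask & corner_mask)
    if xs.size == 0:
        return None                          # no confident action region
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

In practice this would run over every consecutive frame pair, with the resulting boxes smoothed over time before SIFT and STIP features are extracted from them.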
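The recognition step, per the description and the GMM Supervector / Universal Background Model (UBM) subject terms, follows the classical GMM-UBM recipe: fit one background GMM on local descriptors pooled from all training videos, MAP-adapt its means to each video's descriptors, and classify the stacked adapted means with an SVM. A minimal sketch under those assumptions follows, using scikit-learn; `n_components` and the `relevance` factor are illustrative choices, not values reported in the paper.

```python
# Illustrative GMM-UBM supervector sketch (means-only MAP adaptation).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fit_ubm(pooled_descriptors, n_components=64, seed=0):
    # pooled_descriptors: (N, D) SIFT/STIP descriptors from all training videos.
    ubm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=seed)
    return ubm.fit(pooled_descriptors)

def supervector(ubm, descriptors, relevance=16.0):
    # MAP-adapt the UBM means toward one video's descriptors.
    post = ubm.predict_proba(descriptors)            # (T, K) responsibilities
    n_k = post.sum(axis=0)                           # soft counts per component
    x_bar = (post.T @ descriptors) / np.maximum(n_k[:, None], 1e-10)
    alpha = (n_k / (n_k + relevance))[:, None]       # adaptation coefficients
    adapted = alpha * x_bar + (1.0 - alpha) * ubm.means_
    return adapted.ravel()                           # (K*D,) supervector

# Usage sketch: X = np.vstack([supervector(ubm, d) for d in per_video_descs])
# then SVC(kernel="linear").fit(X, labels) for action classification.
```

The means-only adaptation mirrors GMM-UBM speaker verification; whether the paper also adapts weights and covariances is not stated in this record.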