METHODS OF REAL-TIME SPATIO-TEMPORAL ACTIVITY DETECTION AND CATEGORIZATION FROM UNTRIMMED VIDEO SEGMENTS
Methods of detecting and categorizing an action in an untrimmed video segment regardless of the scale of the action and the close proximity of other actions. The methods improve upon the prior art, which either requires trimmed video segments depicting only a single activity, or untrimmed video segments including relatively few actions, persons, or objects of interest, thereby directing the classification. Instead, the methods use a plurality of tubelets to represent discrete actions, persons, and objects of interest within the full untrimmed video segment. The tubelets are localized to correct for pixel-level foreground-background biases and turned into short spatio-temporal action tubelets that are passed to a classification network to obtain multi-label predictions. After classification, the tubelets are linked together to obtain final detections of varying lengths, merging the short action tubelets into final action detections.
Saved in:
Main authors: | Tirupattur, Praveen; Rawat, Yogesh Singh; Rizve, Mamshad Nayeem; Rana, Aayush Jung Bahadur; Shah, Mubarak |
---|---|
Format: | Patent |
Language: | eng |
Subjects: | CALCULATING; COMPUTING; COUNTING; PHYSICS |
Online access: | Order full text |
creator | Tirupattur, Praveen; Rawat, Yogesh Singh; Rizve, Mamshad Nayeem; Rana, Aayush Jung Bahadur; Shah, Mubarak |
description | Methods of detecting and categorizing an action in an untrimmed video segment regardless of the scale of the action and the close proximity of other actions. The methods improve upon the prior art, which either requires trimmed video segments depicting only a single activity, or untrimmed video segments including relatively few actions, persons, or objects of interest, thereby directing the classification. Instead, the methods use a plurality of tubelets to represent discrete actions, persons, and objects of interest within the full untrimmed video segment. The tubelets are localized to correct for pixel-level foreground-background biases and turned into short spatio-temporal action tubelets that are passed to a classification network to obtain multi-label predictions. After classification, the tubelets are linked together to obtain final detections of varying lengths, merging the short action tubelets into final action detections. |
format | Patent |
fulltext | fulltext_linktorsrc |
language | eng |
recordid | cdi_epo_espacenet_US2022222940A1 |
source | esp@cenet |
subjects | CALCULATING; COMPUTING; COUNTING; PHYSICS |
title | METHODS OF REAL-TIME SPATIO-TEMPORAL ACTIVITY DETECTION AND CATEGORIZATION FROM UNTRIMMED VIDEO SEGMENTS |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-21T23%3A22%3A17IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Tirupattur,%20Praveen&rft.date=2022-07-14&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2022222940A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
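The linking-and-merging step described in the abstract — joining short fixed-length action tubelets into final detections of varying length — can be sketched as below. This is a minimal illustrative assumption, not the patented method: the `Tubelet` structure, the greedy strategy, the spatial-IoU threshold, and the `max_gap` parameter are all hypothetical choices for the sketch.

```python
# Hypothetical sketch: merge short action tubelets into final detections
# when they share a class label, are temporally adjacent, and their
# bounding boxes overlap spatially.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

@dataclass
class Tubelet:
    start: int   # first frame index
    end: int     # last frame index (inclusive)
    box: Box     # representative bounding box
    label: str   # predicted action class

def iou(a: Box, b: Box) -> float:
    """Spatial intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def link_tubelets(tubelets: List[Tubelet], iou_thresh: float = 0.5,
                  max_gap: int = 1) -> List[Tubelet]:
    """Greedily extend detections with same-class, overlapping,
    temporally adjacent tubelets; otherwise start a new detection."""
    detections: List[Tubelet] = []
    for t in sorted(tubelets, key=lambda t: t.start):
        for d in detections:
            if (d.label == t.label
                    and t.start - d.end <= max_gap
                    and iou(d.box, t.box) >= iou_thresh):
                d.end = max(d.end, t.end)  # extend the detection in time
                d.box = tuple((x + y) / 2 for x, y in zip(d.box, t.box))
                break
        else:
            detections.append(Tubelet(t.start, t.end, t.box, t.label))
    return detections
```

For example, two consecutive 8-frame "run" tubelets with well-overlapping boxes would be merged into a single 16-frame detection, while a spatially distant "jump" tubelet stays a separate detection.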