Spatiotemporal Action Recognition in Restaurant Videos

Spatiotemporal action recognition is the task of locating and classifying actions in videos. Our project applies this task to analyzing video footage of restaurant workers preparing food, for which potential applications include automated checkout and inventory management. Such videos are quite different from the standardized datasets that researchers are used to, as they involve small objects, rapid actions, and notoriously unbalanced classes. We explore two approaches: the first uses the familiar object detector You Only Look Once (YOLO), and the second applies a recently proposed analogue for action recognition, You Only Watch Once (YOWO). In the first, we design and implement a novel, recurrent modification of YOLO using convolutional LSTMs and explore the various subtleties in the training of such a network. In the second, we study the ability of YOWO's three-dimensional convolutions to capture the spatiotemporal features of our unique dataset.
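The first approach described in the abstract, making YOLO recurrent with convolutional LSTMs, amounts to inserting a ConvLSTM between a per-frame backbone and the detection head so that predictions at frame t can condition on earlier frames. The PyTorch sketch below illustrates only that idea; the ConvLSTMCell and RecurrentDetector classes, the toy two-layer backbone, and all channel counts are hypothetical stand-ins, not the authors' actual architecture or training setup.

# Illustrative sketch only: a minimal ConvLSTM cell of the kind that could sit
# between a YOLO-style backbone and its detection head, so box predictions at
# each frame can depend on earlier frames. Layer sizes and the toy backbone
# below are assumptions for the example, not the paper's trained model.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Standard convolutional LSTM cell: all four gates come from one 2D conv
    over the concatenation of the input feature map and the hidden state."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g          # update cell state
        h = o * torch.tanh(c)      # update hidden state
        return h, c

    def init_state(self, batch, height, width, device):
        zeros = torch.zeros(batch, self.hidden_channels, height, width, device=device)
        return zeros, zeros.clone()


class RecurrentDetector(nn.Module):
    """Toy recurrent detector: per-frame 2D backbone -> ConvLSTM -> 1x1 head.
    The head's output channels stand in for YOLO-style box/class predictions."""

    def __init__(self, backbone_channels=32, hidden_channels=64, num_outputs=25):
        super().__init__()
        self.backbone = nn.Sequential(  # placeholder per-frame feature extractor
            nn.Conv2d(3, backbone_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(backbone_channels, backbone_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.convlstm = ConvLSTMCell(backbone_channels, hidden_channels)
        self.head = nn.Conv2d(hidden_channels, num_outputs, kernel_size=1)

    def forward(self, clip):
        # clip: (batch, time, 3, H, W); returns one prediction map per frame.
        b, t = clip.shape[:2]
        feats = [self.backbone(clip[:, step]) for step in range(t)]
        state = self.convlstm.init_state(b, feats[0].shape[-2], feats[0].shape[-1], clip.device)
        outputs = []
        for f in feats:
            state = self.convlstm(f, state)
            outputs.append(self.head(state[0]))   # predictions for this frame
        return torch.stack(outputs, dim=1)


if __name__ == "__main__":
    model = RecurrentDetector()
    clip = torch.randn(2, 8, 3, 64, 64)   # 2 clips of 8 RGB frames
    print(model(clip).shape)               # torch.Size([2, 8, 25, 16, 16])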

Bibliographic Details
Main Authors: Gupta, Akshat; Desai, Milan; Liang, Wusheng; Kannan, Magesh
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
creator Gupta, Akshat; Desai, Milan; Liang, Wusheng; Kannan, Magesh
description Spatiotemporal action recognition is the task of locating and classifying actions in videos. Our project applies this task to analyzing video footage of restaurant workers preparing food, for which potential applications include automated checkout and inventory management. Such videos are quite different from the standardized datasets that researchers are used to, as they involve small objects, rapid actions, and notoriously unbalanced classes. We explore two approaches: the first uses the familiar object detector You Only Look Once (YOLO), and the second applies a recently proposed analogue for action recognition, You Only Watch Once (YOWO). In the first, we design and implement a novel, recurrent modification of YOLO using convolutional LSTMs and explore the various subtleties in the training of such a network. In the second, we study the ability of YOWO's three-dimensional convolutions to capture the spatiotemporal features of our unique dataset.
doi_str_mv 10.48550/arxiv.2008.11149
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2008.11149
language eng
recordid cdi_arxiv_primary_2008_11149
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
title Spatiotemporal Action Recognition in Restaurant Videos
url https://arxiv.org/abs/2008.11149