Second-order Temporal Pooling for Action Recognition

Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated to video-level representations by computing statistics on these features. Typically, zeroth-order (max) or first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling, that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolutions of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than its first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking Activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes, which, when combined with hand-crafted features (as is standard practice), achieve state-of-the-art accuracy.
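The pooling step described in the abstract is easy to sketch. Below is a minimal NumPy illustration, not the paper's implementation (which is end-to-end learnable inside a network): it treats each of the d feature dimensions as a length-T temporal evolution across clips and pools the d x d matrix of correlations between those evolutions into a fixed-length video descriptor. The function name temporal_correlation_pool, the per-dimension standardization, and the upper-triangle flattening are assumptions made for illustration.

```python
# A minimal sketch of second-order temporal correlation pooling,
# assuming clip-level CNN features have already been extracted.
# Names and normalization choices are illustrative, not the paper's code.
import numpy as np

def temporal_correlation_pool(clip_features: np.ndarray) -> np.ndarray:
    """Pool a (T, d) sequence of clip-level features into a second-order descriptor.

    Each of the d feature dimensions traces a length-T "temporal evolution";
    entry (i, j) of the pooled matrix measures how features i and j
    co-activate over the course of the video.
    """
    T, d = clip_features.shape
    # Standardize each feature dimension's temporal profile so the inner
    # product below is a correlation rather than a raw co-activation.
    Z = clip_features - clip_features.mean(axis=0, keepdims=True)
    Z = Z / (Z.std(axis=0, keepdims=True) + 1e-8)
    # (d, T) @ (T, d) -> (d, d) correlation matrix, averaged over time.
    C = (Z.T @ Z) / T
    # The matrix is symmetric, so flattening its upper triangle keeps all
    # the information in a fixed-length vector for a downstream classifier.
    iu = np.triu_indices(d)
    return C[iu]

# Example: 10 clips of 512-dim features -> one video-level descriptor.
video = np.random.randn(10, 512).astype(np.float32)
descriptor = temporal_correlation_pool(video)
print(descriptor.shape)  # (512 * 513 / 2,) = (131328,)
```

A classifier (for instance a linear SVM) can then be trained on the resulting per-video descriptors; in the end-to-end setting the pooling layer would instead sit between the CNN and the classification layers.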

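The higher-order extension mentioned in the abstract computes correlations after embedding the features in a reproducing kernel Hilbert space. One common way to realize this idea, sketched below under assumed choices (an RBF kernel with a hand-picked bandwidth gamma; the name rbf_temporal_pool is hypothetical), is to replace the linear inner product between two temporal evolutions with a kernel evaluation between them.

```python
# A sketch of the higher-order variant under one common reading: replace
# the linear correlation between temporal evolutions with an RBF kernel,
# which implicitly compares the evolutions in an RKHS. The kernel choice
# and gamma value are assumptions for illustration.
import numpy as np

def rbf_temporal_pool(clip_features: np.ndarray, gamma: float = 0.1) -> np.ndarray:
    """Pool (T, d) clip features via k(z_i, z_j) = exp(-gamma * ||z_i - z_j||^2),
    where z_i, z_j are the length-T temporal evolutions of feature dims i, j."""
    Z = clip_features.T                       # (d, T): one row per evolution
    sq = (Z ** 2).sum(axis=1)                 # squared norm of each row
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = sq[:, None] + sq[None, :] - 2.0 * (Z @ Z.T)
    K = np.exp(-gamma * np.maximum(d2, 0.0))  # (d, d) kernel matrix
    iu = np.triu_indices(K.shape[0])
    return K[iu]
```

Since the RBF kernel corresponds to an inner product in an infinite-dimensional feature space, the pooled matrix captures statistics beyond second order, which is one reading of what makes the extension "higher-order".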
Bibliographic Details
Published in: International Journal of Computer Vision, 2019-04, Vol. 127 (4), pp. 340-362
Authors: Cherian, Anoop; Gould, Stephen
Format: Article
Publisher: Springer US, New York
Language: English (eng)
Subjects: Artificial Intelligence; Benchmarking; Computation; Computer Imaging; Computer Science; Cooking; Datasets; Feature recognition; Hilbert space; Image Processing and Computer Vision; Machine learning; Novels; Pattern Recognition; Pattern Recognition and Graphics; Retirement benefits; Statistics; Vision
Online access: Full text
DOI: 10.1007/s11263-018-1111-5
ISSN: 0920-5691
EISSN: 1573-1405