Multi-Moments in Time: Learning and Interpreting Models for Multi-Action Video Understanding

Videos capture events that typically contain multiple sequential and simultaneous actions, even in the span of only a few seconds. However, most large-scale datasets built to train models for action recognition in video provide only a single label per video. Consequently, models can be incorrectly penalized for classifying actions that exist in the videos but are not explicitly labeled, and they do not learn the full spectrum of information present in each video during training. To address this, we present the Multi-Moments in Time dataset (M-MiT), which includes over two million action labels for over one million three-second videos. This multi-label dataset introduces novel challenges on how to train and analyze models for multi-action detection. Here, we present baseline results for multi-action recognition using loss functions adapted for long-tail multi-label learning, provide improved methods for visualizing and interpreting models trained for multi-label action detection, and show the strength of transferring models trained on M-MiT to smaller datasets.
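As a rough illustration of the multi-label setup described in the abstract, the sketch below trains a per-class sigmoid classifier with a frequency-weighted binary cross-entropy loss, one common way to handle long-tail multi-label data. It is not the authors' training code: the class count, feature dimension, label counts, and weighting scheme are hypothetical placeholders, and the paper's own long-tail-adapted losses may differ.

```python
# Minimal multi-label sketch (illustrative, not the paper's method).
import torch
import torch.nn as nn

num_classes = 313   # illustrative head size; M-MiT has several hundred action classes
feature_dim = 2048  # assumed size of pooled features from a video backbone

# Hypothetical long-tail class frequencies; rarer classes receive larger positive weights.
label_counts = torch.randint(low=10, high=10_000, size=(num_classes,)).float()
pos_weight = label_counts.max() / label_counts  # simple inverse-frequency weighting (one option among many)

classifier = nn.Linear(feature_dim, num_classes)          # multi-label head on top of video features
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)   # independent sigmoid per action class

# Dummy batch: pooled clip features and multi-hot labels (a clip can carry several actions).
features = torch.randn(8, feature_dim)
targets = torch.zeros(8, num_classes)
targets[torch.arange(8), torch.randint(0, num_classes, (8,))] = 1.0  # at least one positive per clip

logits = classifier(features)
loss = criterion(logits, targets)
loss.backward()

# At inference each class is thresholded independently, so multiple actions can fire per clip.
predictions = (torch.sigmoid(logits) > 0.5).int()
print(f"loss={loss.item():.4f}, predicted positives={predictions.sum().item()}")
```

Because each class gets its own sigmoid and threshold, several actions can be predicted for the same clip, which is the key difference from the single-label softmax setup used by most earlier action recognition datasets.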

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-12, Vol. 44 (12), p. 9434-9445
Authors: Monfort, Mathew; Pan, Bowen; Ramakrishnan, Kandan; Andonian, Alex; McNamara, Barry A.; Lascelles, Alex; Fan, Quanfu; Gutfreund, Dan; Feris, Rogerio Schmidt; Oliva, Aude
Format: Article
Language: English
Subjects: Activity recognition; Analytical models; Annotations; benchmarking; Computer vision; Convolutional neural networks; Datasets; Learning; machine learning; methods of data collection; modeling from video; multi-modal recognition; neural nets; Semantics; Three-dimensional displays; Training; Video; vision and scene understanding; Visualization
Online Access: Order full text
DOI: 10.1109/TPAMI.2021.3126682
ISSN: 0162-8828
EISSN: 2160-9292, 1939-3539
PMID: 34752386
Source: IEEE Electronic Library (IEL)