What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions
Spatio-temporal grounding describes the task of localizing events in space and time, e.g., in video data, based on verbal descriptions only. Models for this task are usually trained with human-annotated sentences and bounding box supervision. This work addresses this task from a multimodal supervision perspective, proposing a framework for spatio-temporal action grounding trained on loose video and subtitle supervision only, without human annotation.
Saved in:
Main Authors: | Chen, Brian; Shvetsova, Nina; Rouditchenko, Andrew; Kondermann, Daniel; Thomas, Samuel; Chang, Shih-Fu; Feris, Rogerio; Glass, James; Kuehne, Hilde |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online Access: | Order full text |
creator | Chen, Brian; Shvetsova, Nina; Rouditchenko, Andrew; Kondermann, Daniel; Thomas, Samuel; Chang, Shih-Fu; Feris, Rogerio; Glass, James; Kuehne, Hilde |
description | Spatio-temporal grounding describes the task of localizing events in space and time, e.g., in video data, based on verbal descriptions only. Models for this task are usually trained with human-annotated sentences and bounding box supervision. This work addresses this task from a multimodal supervision perspective, proposing a framework for spatio-temporal action grounding trained on loose video and subtitle supervision only, without human annotation. To this end, we combine local representation learning, which focuses on leveraging fine-grained spatial information, with a global representation encoding that captures higher-level representations and incorporates both in a joint approach. To evaluate this challenging task in a real-life setting, a new benchmark dataset is proposed providing dense spatio-temporal grounding annotations in long, untrimmed, multi-action instructional videos for over 5K events. We evaluate the proposed approach and other methods on the proposed and standard downstream tasks, showing that our method improves over current baselines in various settings, including spatial, temporal, and untrimmed multi-action spatio-temporal grounding. |
doi_str_mv | 10.48550/arxiv.2303.16990 |
format | Article |
creationdate | 2023-03-29 |
rights | http://creativecommons.org/licenses/by/4.0 |
oa | free_for_read |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2303.16990 |
language | eng |
recordid | cdi_arxiv_primary_2303_16990 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions |
url | https://arxiv.org/abs/2303.16990 |
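The abstract describes combining local representation learning over fine-grained spatial information with a global, higher-level representation encoding in a joint approach. The sketch below is a minimal, hypothetical illustration of that general idea only, not the authors' implementation: a region-level text-alignment term (each narration matched to its best spatial region) is added to a standard clip-level contrastive loss. The InfoNCE formulation, the max-over-regions pooling, the module shapes, and the weight `lam` are all assumptions made for illustration.

```python
# Hypothetical sketch of a joint local + global video-text objective.
# Not the paper's code; dimensions, pooling, and weighting are assumptions.
import torch
import torch.nn.functional as F


def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired clip/text embeddings (B, D)."""
    logits = video_emb @ text_emb.t() / temperature                  # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def local_grounding_loss(region_feats, text_emb, temperature=0.07):
    """Align each narration with its best-matching spatial region, then
    contrast across the batch (a common weak-supervision heuristic)."""
    # region_feats: (B, R, D) region/patch features per clip; text_emb: (B, D)
    region_feats = F.normalize(region_feats, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    sims = torch.einsum('brd,kd->bkr', region_feats, text_emb)       # (B, B, R)
    clip_text_sim = sims.max(dim=-1).values / temperature            # (B, B)
    targets = torch.arange(clip_text_sim.size(0), device=clip_text_sim.device)
    return 0.5 * (F.cross_entropy(clip_text_sim, targets) +
                  F.cross_entropy(clip_text_sim.t(), targets))


def joint_loss(global_video_emb, region_feats, text_emb, lam=0.5):
    """Weighted sum of the global (clip-level) and local (region-level) terms."""
    g = info_nce(F.normalize(global_video_emb, dim=-1),
                 F.normalize(text_emb, dim=-1))
    l = local_grounding_loss(region_feats, text_emb)
    return g + lam * l


if __name__ == "__main__":
    B, R, D = 4, 16, 256   # batch size, regions per clip, embedding dim (assumed)
    loss = joint_loss(torch.randn(B, D), torch.randn(B, R, D), torch.randn(B, D))
    print(float(loss))
```

In such a setup the global term keeps clip and narration embeddings aligned at the sentence level, while the local term encourages individual regions to carry the text-relevant signal needed for spatial grounding; how the two terms are actually combined in the paper is not specified by this record.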