Contrastive Language-Action Pre-training for Temporal Localization
Long-form video understanding requires designing approaches that are able to temporally localize activities or language. End-to-end training for such tasks is limited by compute device memory constraints and the lack of large-scale temporal annotations. These limitations can be addressed by pre-training on large datasets of temporally trimmed videos supervised by class annotations.
Main Authors: | Xu, Mengmeng; Gundogdu, Erhan; Lapin, Maksim; Ghanem, Bernard; Donoser, Michael; Bazzani, Loris |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online Access: | Order full text |
creator | Xu, Mengmeng; Gundogdu, Erhan; Lapin, Maksim; Ghanem, Bernard; Donoser, Michael; Bazzani, Loris |
description | Long-form video understanding requires designing approaches that are able to temporally localize activities or language. End-to-end training for such tasks is limited by compute device memory constraints and the lack of large-scale temporal annotations. These limitations can be addressed by pre-training on large datasets of temporally trimmed videos supervised by class annotations. Once the video encoder is pre-trained, it is common practice to freeze it during fine-tuning. As a result, the video encoder learns neither temporal boundaries nor unseen classes, causing a domain gap with respect to the downstream tasks. Moreover, using temporally trimmed videos prevents capturing the relations between different action categories and the background context in a video clip, which results in limited generalization capacity. To address these limitations, we propose a novel post-pre-training approach that leverages language and does not freeze the video encoder. We introduce a masked contrastive learning loss to capture visio-linguistic relations between activities, background video clips, and language in the form of captions. Our experiments show that the proposed approach improves the state of the art on temporal action localization, few-shot temporal action localization, and video language grounding tasks. |
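The masked contrastive objective described in the abstract could be sketched roughly as follows. This is a minimal illustration of a symmetric InfoNCE-style loss with masking, not the authors' implementation; the names `video_feats`, `text_feats`, `mask`, and the temperature value are assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def masked_contrastive_loss(video_feats, text_feats, mask, temperature=0.07):
    # video_feats, text_feats: (N, D) clip and caption embeddings.
    # mask: (N,) bool, True where a clip has a paired caption; background
    # clips without captions are excluded from the positive pairs.
    v = F.normalize(video_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits = v @ t.T / temperature                 # (N, N) cosine similarities
    targets = torch.arange(len(v), device=v.device)
    # Symmetric cross-entropy, computed only over the captioned (unmasked) rows.
    loss_v2t = F.cross_entropy(logits[mask], targets[mask])
    loss_t2v = F.cross_entropy(logits.T[mask], targets[mask])
    return 0.5 * (loss_v2t + loss_t2v)

# Toy usage with random embeddings; the last two clips act as background.
video = torch.randn(8, 256)
text = torch.randn(8, 256)
mask = torch.tensor([True] * 6 + [False] * 2)
print(masked_contrastive_loss(video, text, mask).item())
```

In this sketch, background clips still appear as negatives in the similarity matrix but contribute no positive pair, which loosely mirrors the abstract's idea of relating activities, background clips, and captions.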
doi_str_mv | 10.48550/arxiv.2204.12293 |
format | Article |
creationdate | 2022-04-26 |
identifier | DOI: 10.48550/arxiv.2204.12293 |
language | eng |
recordid | cdi_arxiv_primary_2204_12293 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | Contrastive Language-Action Pre-training for Temporal Localization |
url | https://arxiv.org/abs/2204.12293 |