Video-Text Retrieval by Supervised Sparse Multi-Grained Learning
While recent progress in video-text retrieval has been driven by the exploration of better representation learning, in this paper we present S3MA, a novel multi-grained sparse learning framework that learns an aligned sparse space shared between video and text for video-text retrieval. The shared sparse space is initialized with a finite number of sparse concepts, each of which refers to a number of words. With the text data at hand, we learn and update the shared sparse space in a supervised manner using the proposed similarity and alignment losses. Moreover, to enable multi-grained alignment, we incorporate frame representations to better model the video modality and to compute both fine-grained and coarse-grained similarities. Benefiting from the learned shared sparse space and the multi-grained similarities, S3MA outperforms existing methods in extensive experiments on several video-text retrieval benchmarks. Our code is available at https://github.com/yimuwangcs/Better_Cross_Modal_Retrieval.
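To make the method concrete, here is a minimal, hypothetical PyTorch sketch of the ideas in the abstract; it is not the authors' implementation (see the linked repository for that). It projects dense video, frame, and text embeddings onto a finite set of learnable sparse concepts, combines a coarse video-level similarity with a fine frame-level similarity, and trains with a similarity loss (here, a symmetric contrastive loss) plus an alignment loss (here, an L2 term). All names, tensor shapes, the top-k sparsification, and the exact loss forms are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def sparse_project(x, concepts, top_k=32):
    """Map dense embeddings (n, d) onto c learnable concept vectors (c, d),
    keeping only the top-k activations so each resulting code is sparse."""
    logits = x @ concepts.t()                                  # (n, c) activations
    vals, idx = logits.topk(top_k, dim=-1)
    sparse = torch.zeros_like(logits).scatter_(-1, idx, vals)  # zero all but top-k
    return F.normalize(sparse, dim=-1)

def multi_grained_similarity(video_emb, frame_emb, text_emb, concepts):
    """Coarse (video-text) plus fine (frame-text) similarity, both computed
    in the shared sparse concept space.
    video_emb: (B, d), frame_emb: (B, T, d), text_emb: (B, d)."""
    v = sparse_project(video_emb, concepts)                    # (B, c)
    t = sparse_project(text_emb, concepts)                     # (B, c)
    B, T, d = frame_emb.shape
    f = sparse_project(frame_emb.reshape(B * T, d), concepts).reshape(B, T, -1)

    coarse = v @ t.t()                                         # (B, B) video-text
    # fine: for each (video, text) pair, take the best-matching frame's score
    fine = torch.einsum("btc,nc->bnt", f, t).max(dim=-1).values
    return coarse + fine, v, t

def losses(sim, v, t, temperature=0.05):
    """Similarity loss (symmetric cross-entropy over the similarity matrix)
    plus an alignment loss pulling paired video/text sparse codes together."""
    labels = torch.arange(sim.size(0), device=sim.device)
    sim_loss = 0.5 * (F.cross_entropy(sim / temperature, labels) +
                      F.cross_entropy(sim.t() / temperature, labels))
    align_loss = F.mse_loss(v, t)
    return sim_loss + align_loss

# Usage sketch with random inputs (c=512 concepts, d=256, B=8 pairs, T=12 frames):
concepts = torch.nn.Parameter(torch.randn(512, 256))
sim, v, t = multi_grained_similarity(torch.randn(8, 256), torch.randn(8, 12, 256),
                                     torch.randn(8, 256), concepts)
loss = losses(sim, v, t)
```

In this sketch the concept matrix would be a learnable parameter updated jointly with the encoders, loosely mirroring the supervised learning and updating of the shared sparse space described in the abstract.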
Saved in:
Main authors: | Wang, Yimu; Shi, Peng |
---|---|
Format: | Article |
Language: | eng |
Published: | 2023-02-18 |
Subjects: | Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Information Retrieval; Computer Science - Learning; Computer Science - Multimedia |
Online access: | Order full text (https://arxiv.org/abs/2302.09473) |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Wang, Yimu; Shi, Peng |
description | While recent progress in video-text retrieval has been driven by the exploration of better representation learning, in this paper we present S3MA, a novel multi-grained sparse learning framework that learns an aligned sparse space shared between video and text for video-text retrieval. The shared sparse space is initialized with a finite number of sparse concepts, each of which refers to a number of words. With the text data at hand, we learn and update the shared sparse space in a supervised manner using the proposed similarity and alignment losses. Moreover, to enable multi-grained alignment, we incorporate frame representations to better model the video modality and to compute both fine-grained and coarse-grained similarities. Benefiting from the learned shared sparse space and the multi-grained similarities, S3MA outperforms existing methods in extensive experiments on several video-text retrieval benchmarks. Our code is available at https://github.com/yimuwangcs/Better_Cross_Modal_Retrieval. |
doi_str_mv | 10.48550/arxiv.2302.09473 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2302.09473 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2302_09473 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Information Retrieval; Computer Science - Learning; Computer Science - Multimedia |
title | Video-Text Retrieval by Supervised Sparse Multi-Grained Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T18%3A39%3A51IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Video-Text%20Retrieval%20by%20Supervised%20Sparse%20Multi-Grained%20Learning&rft.au=Wang,%20Yimu&rft.date=2023-02-18&rft_id=info:doi/10.48550/arxiv.2302.09473&rft_dat=%3Carxiv_GOX%3E2302_09473%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |