MVMR: A New Framework for Evaluating Faithfulness of Video Moment Retrieval against Multiple Distractors


Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org 2024-08
Main Authors: Yang, Nakyeong; Kim, Minsung; Yoon, Seunghyun; Shin, Joongbo; Jung, Kyomin
Format: Article
Language: English
Subjects:
Online Access: Full Text
Description: With the explosion of multimedia content, video moment retrieval (VMR), which aims to detect the moment in a video that matches a given text query, has been studied intensively as a critical problem. However, the existing VMR framework evaluates retrieval performance under the assumption that the relevant video is given, which may not reveal whether models are overconfident on falsely given videos. In this paper, we propose the MVMR (Massive Videos Moment Retrieval for Faithfulness Evaluation) task, which aims to retrieve video moments from a massive video set containing multiple distractors, in order to evaluate the faithfulness of VMR models. For this task, we propose an automated massive-video-pool construction framework that categorizes negative (distractor) and positive (false-negative) video sets using textual and visual semantic distance verification. We extend existing VMR datasets with these methods and construct three practical MVMR datasets. To solve the task, we further propose CroCs, an informative sample-weighted learning method that employs two contrastive learning mechanisms: (1) weakly supervised potential-negative learning and (2) cross-directional hard-negative learning. Experimental results on the MVMR datasets reveal that existing VMR models are easily distracted by distractors, whereas our model remains significantly more robust, demonstrating that CroCs is essential for distinguishing positive moments from distractors. Our code and datasets are publicly available: https://github.com/yny0506/Massive-Videos-Moment-Retrieval.
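The pool-construction and contrastive-learning steps summarized in the description above can be sketched roughly as follows. This is a minimal illustration under assumed interfaces (precomputed query and video embeddings, cosine similarity, illustrative thresholds, and a generic InfoNCE-style loss); the paper's exact CroCs objective, thresholds, and embedding models may differ.

```python
import numpy as np

def categorize_pool(query_emb, video_embs, pos_thresh=0.8, neg_thresh=0.5):
    """Split a candidate video pool into likely positives (false negatives)
    and distractors (negatives) by cosine similarity to the query embedding.
    Thresholds here are illustrative, not the paper's values."""
    q = query_emb / np.linalg.norm(query_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    sims = v @ q  # cosine similarity of each candidate video to the query
    positives = np.where(sims >= pos_thresh)[0].tolist()
    negatives = np.where(sims <= neg_thresh)[0].tolist()
    # candidates between the two thresholds are ambiguous and left out
    return positives, negatives

def info_nce_loss(pos_sim, neg_sims, temperature=0.1):
    """Generic InfoNCE-style contrastive loss over one positive similarity
    and a vector of (hard-)negative similarities; a stand-in for the kind
    of objective hard-negative contrastive learning uses."""
    logits = np.concatenate(([pos_sim], neg_sims)) / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # small when the positive dominates the pool
```

Loosely, a candidate scoring near the query would be routed to the positive (false-negative) set rather than used as a distractor, while low-similarity candidates become the negatives that the contrastive loss pushes away.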
DOI: 10.48550/arXiv.2309.16701
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals
Subjects:
Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Computer Vision and Pattern Recognition
Datasets
Frames (data processing)
Multimedia
Natural language
Query languages
Retrieval
Video