VVS: Video-to-Video Retrieval with Irrelevant Frame Suppression
In content-based video retrieval (CBVR), dealing with large-scale collections, efficiency is as important as accuracy; thus, several video-level feature-based studies have actively been conducted. Nevertheless, owing to the severe difficulty of embedding a lengthy and untrimmed video into a single feature, these studies have been insufficient for accurate retrieval compared to frame-level feature-based studies. In this paper, we show that appropriate suppression of irrelevant frames can provide insight into the current obstacles of the video-level approaches. Furthermore, we propose a Video-to-Video Suppression network (VVS) as a solution. VVS is an end-to-end framework that consists of an easy distractor elimination stage to identify which frames to remove and a suppression weight generation stage to determine the extent to suppress the remaining frames. This structure is intended to effectively describe an untrimmed video with varying content and meaningless information. Its efficacy is proved via extensive experiments, and we show that our approach is not only state-of-the-art in video-level approaches but also has a fast inference time despite possessing retrieval capabilities close to those of frame-level approaches. Code is available at https://github.com/sejong-rcv/VVS
Saved in:
Published in: | arXiv.org 2023-12 |
---|---|
Main authors: | Won, Jo; Lim, Geuntaek; Lee, Gwangjin; Kim, Hyunwoo; Ko, Byungsoo; Choi, Yukyung |
Format: | Article |
Language: | eng |
Subjects: | Frames (data processing); Retrieval |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Won, Jo; Lim, Geuntaek; Lee, Gwangjin; Kim, Hyunwoo; Ko, Byungsoo; Choi, Yukyung |
description | In content-based video retrieval (CBVR), dealing with large-scale collections, efficiency is as important as accuracy; thus, several video-level feature-based studies have actively been conducted. Nevertheless, owing to the severe difficulty of embedding a lengthy and untrimmed video into a single feature, these studies have been insufficient for accurate retrieval compared to frame-level feature-based studies. In this paper, we show that appropriate suppression of irrelevant frames can provide insight into the current obstacles of the video-level approaches. Furthermore, we propose a Video-to-Video Suppression network (VVS) as a solution. VVS is an end-to-end framework that consists of an easy distractor elimination stage to identify which frames to remove and a suppression weight generation stage to determine the extent to suppress the remaining frames. This structure is intended to effectively describe an untrimmed video with varying content and meaningless information. Its efficacy is proved via extensive experiments, and we show that our approach is not only state-of-the-art in video-level approaches but also has a fast inference time despite possessing retrieval capabilities close to those of frame-level approaches. Code is available at https://github.com/sejong-rcv/VVS |
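The two-stage idea in the abstract (first drop "easy distractor" frames, then softly weight the survivors into one video-level feature) can be caricatured with simple stand-ins. The sketch below is a minimal illustration, not the paper's method: the norm threshold and norm-based saliency replace VVS's learned modules, and `video_embedding` is a hypothetical helper name.

```python
import numpy as np

def video_embedding(frames, distractor_thresh=0.1):
    """Illustrative VVS-style aggregation of frame features.

    frames: (T, D) array of frame-level features.
    Stage 1 (stand-in): remove frames whose L2 norm is below a
    threshold, mimicking easy distractor elimination.
    Stage 2 (stand-in): softmax-weight the remaining frames by a
    saliency score, then average into a single video-level feature.
    """
    norms = np.linalg.norm(frames, axis=1)
    kept = frames[norms >= distractor_thresh]
    if kept.shape[0] == 0:  # all frames removed: fall back to all
        kept = frames
    # saliency score: here simply the frame norm (illustrative only)
    scores = np.linalg.norm(kept, axis=1)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    video = (weights[:, None] * kept).sum(axis=0)
    return video / (np.linalg.norm(video) + 1e-12)
```

In the actual framework both the elimination decision and the suppression weights are produced by trained networks; only the overall suppress-then-aggregate flow is shown here.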
format | Article |
publisher | Ithaca: Cornell University Library, arXiv.org |
publication_date | 2023-12-19 |
rights | 2023. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2787736593 |
source | Free E-Journals |
subjects | Frames (data processing); Retrieval |
title | VVS: Video-to-Video Retrieval with Irrelevant Frame Suppression |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-19T17%3A00%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=VVS:%20Video-to-Video%20Retrieval%20with%20Irrelevant%20Frame%20Suppression&rft.jtitle=arXiv.org&rft.au=Won,%20Jo&rft.date=2023-12-19&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2787736593%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2787736593&rft_id=info:pmid/&rfr_iscdi=true |