Multimodal Contextualized Support for Enhancing Video Retrieval System
Format: Article
Language: English
Online access: Order full text
Abstract: Current video retrieval systems, especially those used in competitions, primarily focus on querying individual keyframes or images rather than encoding an entire clip or video segment. However, queries often describe an action or event spanning a series of frames, not a specific image. Analyzing a single frame therefore provides insufficient information, leading to less accurate query results. Moreover, extracting embeddings solely from images (keyframes) does not give models enough information to encode higher-level, more abstract insights inferred from the video; such models tend to describe only the objects present in the frame and lack deeper understanding. In this work, we propose a system that integrates the latest methodologies, introducing a novel pipeline that extracts multimodal data and incorporates information from multiple frames within a video. This enables the model to abstract higher-level information that captures latent meanings, focusing on what can be inferred from the video clip rather than on object detection in a single image.
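The abstract contrasts single-keyframe embeddings with clip-level representations built from several frames. The sketch below is one minimal, hypothetical way to realize that idea, not the authors' actual pipeline (which is not detailed here): it encodes each sampled frame with a CLIP checkpoint and mean-pools the frame embeddings into one clip embedding that can be matched against a text query. The model name (`openai/clip-vit-base-patch32`), the pooling choice, and the helper names are all assumptions made for illustration.

```python
# Illustrative sketch only: clip-level embedding via mean-pooled frame
# embeddings. The CLIP checkpoint, the frame-sampling scheme, and mean
# pooling are assumptions, not details taken from the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_keyframe(image: Image.Image) -> torch.Tensor:
    """Baseline: a single-keyframe embedding, as in keyframe-based retrieval."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1).squeeze(0)

def embed_clip_segment(frames: list[Image.Image]) -> torch.Tensor:
    """Clip-level embedding: encode every sampled frame, then mean-pool,
    so the vector reflects the whole segment rather than one image."""
    inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)  # (num_frames, dim)
    feats = torch.nn.functional.normalize(feats, dim=-1)
    pooled = feats.mean(dim=0)                      # aggregate over frames
    return torch.nn.functional.normalize(pooled, dim=-1)

def score(query: str, clip_embedding: torch.Tensor) -> float:
    """Cosine similarity between a text query and a clip embedding."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        text_feats = model.get_text_features(**inputs)
    text_feats = torch.nn.functional.normalize(text_feats, dim=-1).squeeze(0)
    return float(clip_embedding @ text_feats)
```

Mean pooling is only the simplest aggregator; the relevant point is that the retrieval index stores one vector per clip instead of one per keyframe, so queries describing actions that unfold over time are matched against the whole segment.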
DOI: 10.48550/arxiv.2412.07584