VRFP: On-the-Fly Video Retrieval Using Web Images and Fast Fisher Vector Products

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2017-07, Vol. 19 (7), p. 1583-1595
Main Authors: Han, Xintong; Singh, Bharat; Morariu, Vlad I.; Davis, Larry S.
Format: Article
Language: English
Description
Abstract: On-the-fly video retrieval using Web images and fast Fisher Vector products (VRFP) is a real-time video retrieval framework based on short text input queries, which obtains weakly labeled training images from the Web after the query is known. The retrieved Web images representing the query, and each database video, are treated as unordered collections of images, and each collection is represented using a single Fisher Vector built on CNN features. Our experiments show that a Fisher Vector is robust to the noise present in Web images and compares favorably in terms of accuracy to other standard representations. While a Fisher Vector can be constructed efficiently for a new query, matching against the test set is slow due to its high dimensionality. To perform matching in real time, we present a lossless algorithm that accelerates the inner product computation between high-dimensional Fisher Vectors. We prove that the expected number of multiplications required decreases quadratically with the sparsity of the Fisher Vectors. Not only are we able to construct and apply query models in real time, but with the help of a simple reranking scheme we also outperform state-of-the-art automatic retrieval methods by a significant margin on the TRECVID MED13 (3.5%), MED14 (1.3%), and CCV (5.2%) datasets. We also provide a direct comparison on standard datasets between two different paradigms for automatic video retrieval: zero-shot learning and on-the-fly retrieval.
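
The pooling step described in the abstract is the standard Fisher Vector. As a minimal sketch (not the authors' implementation), the snippet below computes the mean-gradient Fisher Vector of an unordered set of CNN descriptors under a diagonal-covariance GMM, in the usual Perronnin-style formulation; the descriptor matrix and the scikit-learn mixture are assumptions of the example, and the sign-square-root and L2 normalization steps common in practice are omitted.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector(descriptors, gmm):
        """Mean-gradient Fisher Vector of an unordered descriptor set.

        descriptors: (T, D) array, e.g. one CNN feature per Web image.
        gmm: a GaussianMixture fit with covariance_type='diag'.
        Returns a K*D-dimensional vector (K = number of components).
        """
        T = descriptors.shape[0]
        gamma = gmm.predict_proba(descriptors)             # (T, K) posteriors
        mu, w = gmm.means_, gmm.weights_                   # (K, D), (K,)
        sigma = np.sqrt(gmm.covariances_)                  # (K, D) diag std-devs
        # Gradient w.r.t. each component mean: one D-dim block per component.
        diff = (descriptors[:, None, :] - mu[None, :, :]) / sigma   # (T, K, D)
        fv = (gamma[:, :, None] * diff).sum(axis=0) / (T * np.sqrt(w)[:, None])
        return fv.ravel()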
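
The claimed speed-up rests on sparsity: a multiplication is needed only at indices where both Fisher Vectors are nonzero. If each entry is zero independently with probability s, both entries are nonzero with probability (1 - s)^2, so the expected number of multiplications is (1 - s)^2 * d, matching the quadratic decrease stated above. The sketch below checks this empirically; it illustrates the idea rather than reproducing the paper's lossless algorithm.

    import numpy as np

    def sparse_inner_product(x, y):
        """<x, y> computed only over indices where both entries are nonzero."""
        common = np.intersect1d(np.flatnonzero(x), np.flatnonzero(y))
        return float(np.dot(x[common], y[common])), len(common)

    rng = np.random.default_rng(0)
    d = 100_000                       # illustrative Fisher Vector dimensionality
    for s in (0.5, 0.9, 0.99):        # s = fraction of zero entries (sparsity)
        x = rng.standard_normal(d) * (rng.random(d) > s)
        y = rng.standard_normal(d) * (rng.random(d) > s)
        _, mults = sparse_inner_product(x, y)
        print(f"s={s:.2f}: {mults} multiplications, expected ~{(1 - s)**2 * d:.0f}")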
ISSN: 1520-9210
eISSN: 1941-0077
DOI: 10.1109/TMM.2017.2671414