WALRUS: a similarity retrieval algorithm for image databases

Bibliographic Details
Published in: SIGMOD Record 1999-06, Vol. 28 (2), p. 395-406
Main authors: Natsev, Apostol; Rastogi, Rajeev; Shim, Kyuseok
Format: Article
Language: English
Description
Summary: Traditional approaches for content-based image querying typically compute a single signature for each image based on color histograms, texture, wavelet transforms, etc., and return, as the query result, the images whose signatures are closest to the signature of the query image. Consequently, most traditional methods break down when images contain similar objects that are scaled differently or located at different positions, or when only certain regions of the images match. In this paper, we propose WALRUS (WAveLet-based Retrieval of User-specified Scenes), a novel similarity retrieval algorithm that is robust to scaling and translation of objects within an image. WALRUS employs a novel similarity model in which each image is first decomposed into its regions, and the similarity measure between a pair of images is then defined to be the fraction of the area of the two images covered by matching regions from the images. In order to extract regions for an image, WALRUS considers sliding windows of varying sizes and then clusters them based on the proximity of their signatures. An efficient dynamic programming algorithm is used to compute wavelet-based signatures for the sliding windows. Experimental results on real-life data sets corroborate the effectiveness of WALRUS's similarity model, which performs similarity matching at region rather than image granularity.
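As a rough illustration of the region-based similarity model described in the abstract, the following Python sketch computes a score for two images as the fraction of their combined area covered by matching region pairs. The Region representation, the Euclidean signature distance, the greedy matching, and the threshold parameter are assumptions made for illustration only; they are not taken from the paper's actual algorithm or data structures.

# Illustrative sketch (not the paper's implementation) of a region-level
# similarity score: the fraction of the total area of two images that is
# covered by pairs of regions with sufficiently close signatures.

from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    area: float             # area of the region within its image
    signature: List[float]  # e.g. averaged wavelet coefficients (assumed form)

def signature_distance(a: List[float], b: List[float]) -> float:
    """Euclidean distance between two region signatures."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def region_similarity(regions1: List[Region], regions2: List[Region],
                      image_area1: float, image_area2: float,
                      threshold: float = 0.1) -> float:
    """Fraction of the combined image area covered by matching regions.

    Two regions are treated as matching when their signatures are within
    `threshold` of each other; each region in the second image is matched
    at most once (greedy pairing, an assumption for simplicity).
    """
    matched2 = set()
    covered = 0.0
    for r1 in regions1:
        for j, r2 in enumerate(regions2):
            if j in matched2:
                continue
            if signature_distance(r1.signature, r2.signature) <= threshold:
                covered += r1.area + r2.area
                matched2.add(j)
                break
    return covered / (image_area1 + image_area2)

Under this model, a score of 1.0 would mean every part of both images lies in some matched region pair, while images sharing only a small object would still receive a nonzero score proportional to that object's area.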
ISSN: 0163-5808, 1943-5835
DOI: 10.1145/304181.304217