Semantic preserving asymmetric discrete hashing for cross-modal retrieval
Published in: Applied Intelligence (Dordrecht, Netherlands), 2023-06, Vol. 53 (12), p. 15352-15371
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: In recent years, hashing technologies have garnered substantial attention and achieved notable results due to their low storage costs and excellent retrieval efficiency. However, the majority of existing approaches build a massive pairwise similarity matrix to maintain the similarity relationship of the original space, which incurs a huge time and space overhead and loses class information, making these approaches unscalable to large-scale multimedia datasets. Additionally, most cross-modal techniques learn the hash functions and the binary representations simultaneously, which makes optimization more difficult. To tackle these issues, we developed a hashing approach called Semantic preserving Asymmetric discrete Hashing for cross-modal retrieval (SEAH), which aims to preserve the similarity metric based on both global semantic information and the local similarity structure. Specifically, SEAH adopts an asymmetric learning scheme and embeds class attribute information to boost the discriminating strength of the learned binary codes. SEAH then employs a well-designed optimization algorithm to achieve efficient iterative optimization, thus avoiding the quantization error problem. In addition, the proposed SEAH is a two-stage approach; two algorithms, SEAH-t and SEAH-s, are developed for the second stage. The first adopts linear classifiers as hash functions, while the second is a semantic-enhanced strategy that uses distance-distance difference minimization to improve the ability of the to-be-learned hash functions. Extensive experiments on three frequently used benchmark datasets show that the proposed SEAH-t and SEAH-s are not only superior to several state-of-the-art approaches but also maintain high query and storage efficiency.
ISSN: 0924-669X, 1573-7497
DOI: 10.1007/s10489-022-04282-w
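The abstract describes the second stage only at a high level. As a rough illustration of the SEAH-t variant, which "adopts linear classifiers as hash functions", the sketch below fits a linear projection from one modality's features to the binary codes produced in the first stage via ridge regression. The closed-form solver, the regularization weight, and all variable names are assumptions made for illustration, not the paper's actual formulation.

```python
import numpy as np

def learn_linear_hash_function(X, B, lam=1.0):
    """Fit a linear projection W so that sign(X @ W) approximates the
    first-stage binary codes B (a common way to realize linear hash
    functions in a two-stage scheme; details here are assumed, not
    taken from the SEAH paper).

    X   : (n, d) real-valued features for one modality
    B   : (n, r) binary codes in {-1, +1} learned in the first stage
    lam : ridge regularization weight (hypothetical parameter)
    """
    d = X.shape[1]
    # Closed-form ridge solution: W = (X^T X + lam * I)^{-1} X^T B
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ B)
    return W

def hash_codes(X, W):
    """Map features to r-bit codes with the learned linear hash function."""
    return np.sign(X @ W)

# Toy usage: random features and first-stage codes stand in for real data.
rng = np.random.default_rng(0)
X_img = rng.standard_normal((500, 128))      # image-modality features
B = np.sign(rng.standard_normal((500, 32)))  # 32-bit codes from stage one
W_img = learn_linear_hash_function(X_img, B)
query_codes = hash_codes(X_img[:5], W_img)
print(query_codes.shape)  # (5, 32)
```

In such a two-stage setup, an analogous projection would be learned per modality (e.g., one for image features and one for text features) so that queries from either modality map into the same Hamming space; how SEAH-s refines this with distance-distance difference minimization is not reconstructed here.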