Modality-Invariant Asymmetric Networks for Cross-Modal Hashing

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2023-05, Vol. 35 (5), p. 5091-5104
Authors: Zhang, Zheng; Luo, Haoyang; Zhu, Lei; Lu, Guangming; Shen, Heng Tao
Format: Article
Language: English
Description
Abstract: Cross-modal hashing has garnered considerable attention and achieved great success in many cross-media similarity search applications due to its prominent computational efficiency and low storage overhead. However, it remains challenging to effectively exploit multilevel semantics over the entire database so as to jointly bridge the semantic and heterogeneity gaps across different modalities. In this paper, we propose a novel Modality-Invariant Asymmetric Networks (MIAN) architecture, which explores asymmetric intra- and inter-modal similarity preservation under a probabilistic modality alignment framework. Specifically, an intra-modal asymmetric network is conceived to capture the query-vs-all internal pairwise similarities for each modality in a probabilistic asymmetric learning manner. Moreover, an inter-modal asymmetric network is deployed to fully harness the cross-modal semantic similarities, supported by the maximum inner product search formulation between two distinct hash embeddings. In particular, the pairwise, piecewise, and transformed semantics are jointly incorporated into one unified semantic-preserving hash code learning scheme. Furthermore, we construct a modality alignment network to distill redundancy-free visual features and maximize the conditional bottleneck information between different modalities. Such a network closes the heterogeneity and domain shift across different modalities and enables the model to yield discriminative modality-invariant hash codes. Extensive experiments demonstrate that our MIAN approach outperforms state-of-the-art cross-modal hashing methods.
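
The asymmetric inner-product formulation mentioned in the abstract (relaxed query hash embeddings matched against database-wide binary codes, so that retrieval reduces to maximum inner product search) can be illustrated with a minimal sketch. This is not the paper's MIAN implementation; the function name, tensor shapes, and the simple squared-error loss below are assumptions chosen only to show how a real-valued query embedding is compared against discrete database codes under pairwise semantic supervision.

import torch

# Toy asymmetric hashing objective: a minimal sketch, NOT the MIAN model.
# All names, shapes, and the squared-error loss are illustrative assumptions.
def asymmetric_similarity_loss(query_codes, db_codes, similarity, code_len):
    # query_codes: (m, k) relaxed, real-valued query hash embeddings in [-1, 1]
    # db_codes:    (n, k) discrete {-1, +1} codes for the whole database
    # similarity:  (m, n) pairwise semantic labels, +1 similar / -1 dissimilar
    # The scaled inner product approximates Hamming affinity and is the quantity
    # that maximum inner product search ranks against at query time.
    inner = query_codes @ db_codes.t() / code_len   # (m, n), values in [-1, 1]
    # Asymmetric pairwise term: pull inner products toward the semantic labels.
    return ((inner - similarity) ** 2).mean()

# Minimal usage with random tensors (illustrative only).
m, n, k = 8, 64, 32
query_codes = torch.tanh(torch.randn(m, k))         # relaxed network outputs
db_codes = torch.sign(torch.randn(n, k))            # fixed binary database codes
similarity = torch.sign(torch.randn(m, n))          # toy +1/-1 semantic matrix
loss = asymmetric_similarity_loss(query_codes, db_codes, similarity, k)
print(float(loss))

The asymmetry here is that the query side stays continuous and trainable while the database side stays binary, which keeps the quantization burden off the query network; the paper's actual objective additionally handles intra-modal, inter-modal, and modality-alignment terms not shown in this sketch.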
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2022.3144352