Adversarial Binary Mutual Learning for Semi-Supervised Deep Hashing


Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2022-08, Vol. 33 (8), pp. 4110-4124
Authors: Wang, Guan'An; Hu, Qinghao; Yang, Yang; Cheng, Jian; Hou, Zeng-Guang
Format: Article
Language: English
Description
Abstract: Hashing is a popular search technique thanks to its compact binary representation and efficient Hamming distance computation. Benefiting from advances in deep learning, deep hashing methods have achieved promising performance. However, these methods usually rely on expensive labeled data and fail to utilize unlabeled data. Furthermore, the traditional pairwise loss they use cannot explicitly push similar/dissimilar pairs toward small/large distances. Both weaknesses limit the performance of existing methods. To solve the first problem, we propose a novel semi-supervised deep hashing model named adversarial binary mutual learning (ABML). Specifically, ABML consists of a generative model G_H and a discriminative model D_H, where D_H learns from labeled data in a supervised way and G_H learns from unlabeled data by synthesizing real images. We adopt an adversarial learning (AL) strategy to transfer the knowledge of unlabeled data to D_H by making G_H and D_H mutually learn from each other. To solve the second problem, we propose a novel Weibull cross-entropy (WCE) loss based on the Weibull distribution, which can distinguish tiny differences in distances and explicitly force similar/dissimilar distances to be as small/large as possible. The learned features are thus more discriminative. Finally, by combining ABML with the WCE loss, our model acquires more semantic and discriminative features. Extensive experiments on four common data sets (CIFAR-10, MNIST, ImageNet-10, and NUS-WIDE) and the large-scale ImageNet data set demonstrate that our approach successfully overcomes the two difficulties above and significantly outperforms state-of-the-art hashing methods.
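The abstract does not give the exact form of the WCE loss, but the core idea — mapping a pairwise distance through the Weibull cumulative distribution function to obtain a probability, then applying cross-entropy against the pair label — can be sketched as below. The function names, the `scale`/`shape` parameters, and the choice of CDF-as-dissimilarity-probability are illustrative assumptions, not the paper's exact formulation.

```python
import math

def weibull_cdf(d, scale=1.0, shape=2.0):
    """Weibull CDF F(d) = 1 - exp(-(d/scale)^shape): grows from 0 toward 1
    as the distance d increases, so it can act as P(pair is dissimilar)."""
    return 1.0 - math.exp(-((d / scale) ** shape))

def weibull_cross_entropy(dist, similar, scale=1.0, shape=2.0, eps=1e-7):
    """Cross-entropy between the Weibull-mapped distance and the pair label.

    similar=1 (a similar pair) penalizes large distances;
    similar=0 (a dissimilar pair) penalizes small distances.
    """
    p_dissim = min(max(weibull_cdf(dist, scale, shape), eps), 1.0 - eps)
    target = 1.0 - similar  # desired probability of "dissimilar"
    return -(target * math.log(p_dissim)
             + (1.0 - target) * math.log(1.0 - p_dissim))
```

Because the Weibull CDF is steep around the scale parameter, small changes in distance near that region translate into large changes in probability, which is one way a loss of this shape could "distinguish tiny differences of distances" as the abstract claims: a similar pair at distance 0.1 incurs almost no loss, while the same pair at distance 3.0 is penalized heavily.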
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2021.3055834