Deep Class-Wise Hashing: Semantics-Preserving Hashing via Class-Wise Loss
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2020-05, Vol. 31 (5), pp. 1681-1695
Main authors: , ,
Format: Article
Language: English
Abstract: Deep supervised hashing has emerged as an effective solution to large-scale semantic image retrieval problems in computer vision. Convolutional neural network-based hashing methods typically rely on pairwise or triplet labels to conduct similarity-preserving learning. However, the complex semantic concepts of visual content are hard to capture with similar/dissimilar labels, which limits retrieval performance. In general, pairwise and triplet losses not only incur expensive training costs but also lack sufficient semantic information. In this paper, we propose a novel deep supervised hashing model that learns more compact, class-level similarity-preserving binary codes. Our model is motivated by deep metric learning: it directly takes semantic labels as supervision during training and generates the corresponding discriminative hashing codes. Specifically, we propose a novel cubic constraint loss function based on the Gaussian distribution, which preserves semantic variation while penalizing the overlap between different classes in the embedding space. To address the discrete optimization problem introduced by binary codes, a two-step optimization strategy is proposed that provides efficient training and avoids gradient vanishing. Extensive experiments on five large-scale benchmark databases show that our model achieves state-of-the-art retrieval performance.
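The abstract does not give the exact formulation of the class-wise loss or the two-step scheme, but the general idea can be sketched. Below is a minimal PyTorch sketch, under the following assumptions: each class is modeled as a Gaussian-style cluster around a learnable center, overlap between classes is discouraged by a margin penalty on center distances, and binary codes are handled by a tanh relaxation (step 1: continuous optimization) followed by taking the sign at inference (step 2: discretization). All names here (ClassWiseHashLoss, margin, the quantization weight 0.1) are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch of a class-wise similarity-preserving hash loss;
# not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassWiseHashLoss(nn.Module):
    def __init__(self, num_classes: int, code_len: int, margin: float = 2.0):
        super().__init__()
        # Learnable per-class centers in the relaxed Hamming space.
        self.centers = nn.Parameter(torch.randn(num_classes, code_len))
        self.margin = margin

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Step 1 of a two-step scheme: optimize a continuous tanh
        # relaxation so gradients do not vanish through a hard sign().
        codes = torch.tanh(features)        # (B, code_len), values in (-1, 1)
        centers = torch.tanh(self.centers)  # keep centers in the same range

        # Pull each sample toward its class center; the squared-distance
        # term tolerates (Gaussian-style) within-class variation softly.
        own = centers[labels]               # (B, code_len)
        pull = ((codes - own) ** 2).sum(dim=1).mean()

        # Push different class centers apart to reduce class overlap.
        dists = torch.cdist(centers, centers)  # (C, C) pairwise distances
        off_diag = dists + torch.eye(len(centers), device=dists.device) * self.margin
        push = F.relu(self.margin - off_diag).mean()

        # Quantization term nudging codes toward {-1, +1}; step 2 at
        # inference simply takes sign(codes) to obtain binary hashes.
        quant = (codes.abs() - 1.0).pow(2).mean()

        return pull + push + 0.1 * quant
```

A hypothetical usage, where `backbone` is a CNN mapping images to code_len-dimensional features: `loss = ClassWiseHashLoss(num_classes=10, code_len=48)(backbone(images), labels)` during training, and `torch.sign(torch.tanh(backbone(images)))` to produce the final binary codes at retrieval time.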
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2019.2921805