A Novel Soft Margin Loss Function for Deep Discriminative Embedding Learning
Published in: IEEE Access, 2020, Vol. 8, pp. 202785-202794
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Deep embedding learning aims to learn discriminative feature representations through a deep convolutional neural network model. Commonly, such a model contains a network architecture and a loss function. The architecture is responsible for hierarchical feature extraction, while the loss function supervises the training procedure with the goal of maximizing inter-class separability and intra-class compactness. Since the loss function is crucial to feature performance, in this article we propose a new loss function, called soft margin loss (SML), based on a classification framework for deep embedding learning. Specifically, we first normalize the learned features and the classification weights to map them onto the hypersphere. We then construct the loss from the difference between the maximum intra-class distance and the minimum inter-class distance. By constraining this distance difference with a soft margin inherent in the proposed loss, both the inter-class discrepancy and the intra-class compactness of the learned features are effectively improved. Finally, under joint training with an improved softmax loss, the model learns features with strong discriminability. Toy experiments on the MNIST dataset show the effectiveness of the proposed method, and experiments on re-identification tasks further demonstrate the superior embedding performance: 65.48% / 62.68% mAP on the CUHK03 labeled / detected datasets (person re-id) and 74.36% mAP on the VeRi-776 dataset (vehicle re-id).
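The abstract describes the loss construction in three steps: normalize features and classification weights onto the unit hypersphere, then penalize the gap between the largest intra-class distance and the smallest inter-class distance through a soft (softplus-style) margin. The paper's exact formulation is not reproduced in this record, so the following is only a minimal numpy sketch of that idea; the function name, the batch-level max/min reduction, and the use of softplus as the soft margin are assumptions for illustration.

```python
import numpy as np

def soft_margin_loss(features, weights, labels):
    """Sketch of a hypersphere soft-margin loss (illustrative, not the paper's exact SML).

    features: (n, d) batch of learned embeddings
    weights:  (C, d) classification weight vectors, one per class
    labels:   (n,)   integer class label per sample
    """
    # Step 1: L2-normalize features and class weights onto the unit hypersphere.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)

    # Euclidean distance from each sample to each class weight vector: (n, C).
    d = np.linalg.norm(f[:, None, :] - w[None, :, :], axis=2)

    n = len(labels)
    # Intra-class distance: each sample to its own class weight.
    intra = d[np.arange(n), labels]
    # Inter-class distance: each sample to its nearest *other* class weight.
    inter = d.copy()
    inter[np.arange(n), labels] = np.inf
    min_inter = inter.min(axis=1)

    # Step 2: soft margin via softplus on (max intra-class - min inter-class).
    # The loss shrinks as intra-class distances fall below inter-class ones.
    return np.log1p(np.exp(intra.max() - min_inter.min()))
```

The softplus keeps the loss smooth and strictly positive, so the margin between the two distance extremes is encouraged rather than hard-thresholded, which matches the "soft margin that is inherent in the proposed loss" phrasing of the abstract.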
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3036185