Supervised Metric Learning to Rank for Retrieval via Contextual Similarity Optimization
Main authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | There is extensive interest in metric learning methods for image retrieval. Many metric learning loss functions focus on learning a correct ranking of training samples, but they strongly overfit semantically inconsistent labels and require large amounts of data. To address these shortcomings, we propose a new metric learning method, called contextual loss, which optimizes contextual similarity in addition to cosine similarity. Our contextual loss implicitly enforces semantic consistency among neighbors while converging to the correct ranking. We empirically show that the proposed loss is more robust to label noise and less prone to overfitting even when a large portion of the training data is withheld. Extensive experiments demonstrate that our method achieves a new state-of-the-art across four image retrieval benchmarks and multiple evaluation settings. Code is available at: https://github.com/Chris210634/metric-learning-using-contextual-similarity |
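The abstract describes optimizing a contextual (neighborhood-based) similarity alongside cosine similarity. As a rough illustration only, and not the paper's actual formulation (which is in the linked repository), the sketch below defines contextual similarity as the Jaccard overlap between k-nearest-neighbor sets and adds it to a plain cosine pull term for same-label pairs; the names `combined_loss`, `alpha`, and `k` are hypothetical:

```python
import math

def cosine_sim(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_sets(embeddings, k):
    # index set of the k nearest neighbors (by cosine similarity) of each point;
    # each point's own index is included, as is common in re-ranking methods
    sets = []
    for e in embeddings:
        order = sorted(range(len(embeddings)),
                       key=lambda j: -cosine_sim(e, embeddings[j]))
        sets.append(set(order[:k]))
    return sets

def contextual_sim(i, j, neighbor_sets):
    # Jaccard overlap of neighbor sets: high when i and j share context
    inter = len(neighbor_sets[i] & neighbor_sets[j])
    union = len(neighbor_sets[i] | neighbor_sets[j])
    return inter / union

def combined_loss(embeddings, labels, k=2, alpha=0.5):
    # Illustrative loss: pull same-label pairs together under both cosine
    # and contextual similarity; alpha is a hypothetical weighting term.
    sets = knn_sets(embeddings, k)
    loss, n = 0.0, 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if labels[i] == labels[j]:
                cos = cosine_sim(embeddings[i], embeddings[j])
                ctx = contextual_sim(i, j, sets)
                loss += (1 - cos) + alpha * (1 - ctx)
                n += 1
    return loss / max(n, 1)
```

Two identical same-label clusters yield a loss of zero under this sketch, while perturbing an embedding within a class makes the loss positive, which is the qualitative behavior the combined objective is meant to capture.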
DOI: | 10.48550/arxiv.2210.01908 |