Neural Bregman Divergences for Distance Learning
Main Authors: , ,
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: Many metric learning tasks, such as triplet learning, nearest neighbor retrieval, and visualization, are treated primarily as embedding tasks where the ultimate metric is some variant of the Euclidean distance (e.g., cosine or Mahalanobis), and the algorithm must learn to embed points into the pre-chosen space. Non-Euclidean geometries are often left unexplored, which we believe is due to a lack of tools for learning non-Euclidean measures of distance. Recent work has shown that Bregman divergences can be learned from data, opening a promising approach to learning asymmetric distances. We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks and show that it overcomes significant limitations of previous works. We also demonstrate that our method more faithfully learns divergences over a set of both new and previously studied tasks, including asymmetric regression, ranking, and clustering. Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification, showing the general utility of our approach for asymmetric learning.
DOI: 10.48550/arxiv.2206.04763
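For context, a Bregman divergence is generated by a strictly convex function phi as D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>. The sketch below, assuming PyTorch, illustrates the idea described in the summary: parameterize phi with an input convex neural network so the divergence is non-negative by construction and differentiable end to end. The architecture, names, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming PyTorch: a Bregman divergence parameterized by an
# input convex neural network (ICNN). Architecture and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """phi(x), convex in x: z-path weights are clamped non-negative and the
    activation (softplus) is convex and non-decreasing."""

    def __init__(self, dim, hidden=64, n_layers=3):
        super().__init__()
        self.x_layers = nn.ModuleList(nn.Linear(dim, hidden) for _ in range(n_layers))
        self.z_layers = nn.ModuleList(
            nn.Linear(hidden, hidden, bias=False) for _ in range(n_layers - 1)
        )
        self.out_x = nn.Linear(dim, 1)
        self.out_z = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.x_layers[0](x))
        for x_lin, z_lin in zip(self.x_layers[1:], self.z_layers):
            # Non-negative z-path weights preserve convexity of z in x.
            z = F.softplus(x_lin(x) + F.linear(z, z_lin.weight.clamp(min=0)))
        return self.out_x(x) + F.linear(z, self.out_z.weight.clamp(min=0))


def bregman_divergence(phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>.

    Uses autograd with create_graph=True so the divergence stays differentiable
    with respect to phi's parameters; x and y are treated as plain data tensors.
    """
    y = y.detach().requires_grad_(True)
    phi_y = phi(y)
    (grad_y,) = torch.autograd.grad(phi_y.sum(), y, create_graph=True)
    inner = ((x - y) * grad_y).sum(dim=-1, keepdim=True)
    return (phi(x) - phi_y - inner).squeeze(-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    phi = ICNN(dim=5)
    x, y = torch.randn(8, 5), torch.randn(8, 5)
    d_xy, d_yx = bregman_divergence(phi, x, y), bregman_divergence(phi, y, x)
    # Non-negative (up to float error) and generally asymmetric.
    print(d_xy.shape, bool((d_xy >= -1e-5).all()), bool(torch.allclose(d_xy, d_yx)))
```

A learned divergence of this form can be dropped into triplet, ranking, or clustering objectives in place of a Euclidean distance, with gradients flowing into phi through both phi(x) and the gradient term in D_phi.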