Neural Bregman Divergences for Distance Learning


Bibliographic Details
Main authors: Lu, Fred; Raff, Edward; Ferraro, Francis
Format: Article
Language: English
Online access: order full text
description Many metric learning tasks, such as triplet learning, nearest neighbor retrieval, and visualization, are treated primarily as embedding tasks where the ultimate metric is some variant of the Euclidean distance (e.g., cosine or Mahalanobis), and the algorithm must learn to embed points into the pre-chosen space. The study of non-Euclidean geometries is often not explored, which we believe is due to a lack of tools for learning non-Euclidean measures of distance. Recent work has shown that Bregman divergences can be learned from data, opening a promising approach to learning asymmetric distances. We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks and show that it overcomes significant limitations of previous works. We also demonstrate that our method more faithfully learns divergences over a set of both new and previously studied tasks, including asymmetric regression, ranking, and clustering. Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification, showing the general utility of our approach for asymmetric learning.
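The construction the abstract refers to can be illustrated concretely: parameterize a convex function phi(x) with nonnegative mixing weights (the basic ingredient of an input convex neural network) and read off the induced Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, which is nonnegative, zero at x = y, and generally asymmetric. The sketch below is a minimal NumPy illustration under assumptions chosen for brevity (a single softplus layer, fixed random weights), not the authors' architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer input-convex parameterization (an assumption for this
# sketch): phi(x) = sum_i a_i * softplus(W[i] @ x + b[i]) with a_i >= 0.
# A nonnegative combination of convex functions of affine maps is convex.
d, h = 3, 8
W = rng.normal(size=(h, d))
b = rng.normal(size=h)
a = np.abs(rng.normal(size=h))  # nonnegative weights keep phi convex

def softplus(z):
    return np.logaddexp(0.0, z)  # numerically stable log(1 + exp(z))

def phi(x):
    return float(a @ softplus(W @ x + b))

def grad_phi(x):
    # d/dz softplus(z) = sigmoid(z), so grad phi = W^T (a * sigmoid(Wx + b))
    s = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return W.T @ (a * s)

def bregman(x, y):
    # D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>
    # Convexity of phi guarantees this is >= 0, with equality at x = y.
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

x, y = rng.normal(size=d), rng.normal(size=d)
print(bregman(x, y) >= 0)              # True: nonnegativity from convexity
print(np.isclose(bregman(x, x), 0.0))  # True: D(x, x) = 0
```

In the paper's setting the parameters of phi would be trained end-to-end (the divergence is differentiable in W, b, and a), whereas here they are fixed at random purely to exhibit the divergence's defining properties.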
doi 10.48550/arxiv.2206.04763
creationdate 2022-06-09
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
fulltext https://arxiv.org/abs/2206.04763
source arXiv.org
subjects Computer Science - Learning