Estimating 2-Sinkhorn Divergence between Gaussian Processes from Finite-Dimensional Marginals
Format: Article
Language: English
Abstract: \emph{Optimal Transport} (OT) has emerged as an important computational tool in machine learning and computer vision, providing a geometrical framework for studying probability measures. Unfortunately, OT suffers from the curse of dimensionality and requires regularization for practical computations; a popular choice is \emph{entropic regularization}, which can be 'unbiased', resulting in the \emph{Sinkhorn divergence}. In this work, we study the convergence of estimating the 2-Sinkhorn divergence between \emph{Gaussian processes} (GPs) using their finite-dimensional marginal distributions. We show almost sure convergence of the divergence when the marginals are sampled according to some base measure. Furthermore, we show that using $n$ marginals, the estimation error of the divergence scales in a dimension-free way as $\mathcal{O}\left(\epsilon^{-1}n^{-\frac{1}{2}}\right)$, where $\epsilon$ is the magnitude of the entropic regularization.
DOI: 10.48550/arxiv.2102.03267
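
The abstract describes a pipeline: sample $n$ index points from a base measure, form the $n$-dimensional Gaussian marginals of each GP at those points, and compute a 2-Sinkhorn divergence between them. The sketch below illustrates that pipeline; it is a minimal illustration, not the paper's exact estimator. It assumes the POT library (`ot`) and approximates the divergence empirically from samples drawn from the marginal Gaussians (closed forms exist for Gaussians but are not used here); the kernels, mean functions, sample sizes, and the $1/n$ cost scaling are all illustrative assumptions.

```python
# Illustrative sketch: estimate a 2-Sinkhorn divergence between two GPs
# from their n-dimensional marginals. NOT the paper's exact estimator;
# all modeling choices below are assumptions made for demonstration.
import numpy as np
import ot  # Python Optimal Transport (POT)

rng = np.random.default_rng(0)

def rbf_kernel(x, y, lengthscale):
    """Squared-exponential kernel evaluated on 1-D inputs."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * lengthscale ** 2))

n = 50     # number of marginal (index) points
m = 500    # samples drawn from each finite-dimensional marginal
eps = 0.1  # entropic regularization magnitude (epsilon in the abstract)

# (i) Sample index points from a base measure (here: uniform on [0, 1]).
xs = rng.uniform(0.0, 1.0, size=n)

# (ii) Finite-dimensional marginals N(mu_i, K_i) on R^n for two GPs with
# different means and lengthscales (arbitrary illustrative choices).
jitter = 1e-8 * np.eye(n)  # stabilizes the covariance factorization
mu0, K0 = np.zeros(n), rbf_kernel(xs, xs, 0.2) + jitter
mu1, K1 = np.sin(2 * np.pi * xs), rbf_kernel(xs, xs, 0.3) + jitter
X0 = rng.multivariate_normal(mu0, K0, size=m)
X1 = rng.multivariate_normal(mu1, K1, size=m)

# (iii) Empirical Sinkhorn divergence between the two marginal samples.
# Scaling by 1/sqrt(n) turns the squared-Euclidean cost into an average
# over the n points, mimicking an L^2 cost under the base measure.
sd = ot.bregman.empirical_sinkhorn_divergence(
    X0 / np.sqrt(n), X1 / np.sqrt(n), reg=eps)
print(f"Estimated 2-Sinkhorn divergence with n={n} marginals: {float(sd):.4f}")
```

Per the abstract's rate, rerunning with larger $n$ (and enough samples $m$ per marginal) should shrink the estimation error at roughly $\mathcal{O}(\epsilon^{-1}n^{-1/2})$, independently of dimension.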