Efficiently Computing Similarities to Private Datasets
Format: Article
Language: English
Abstract: Many methods in differentially private model training rely on computing the
similarity between a query point (such as public or synthetic data) and private
data. We abstract out this common subroutine and study the following
fundamental algorithmic problem: Given a similarity function $f$ and a large
high-dimensional private dataset $X \subset \mathbb{R}^d$, output a
differentially private (DP) data structure which approximates $\sum_{x \in X}
f(x,y)$ for any query $y$. We consider the cases where $f$ is a kernel
function, such as $f(x,y) = e^{-\|x-y\|_2^2/\sigma^2}$ (also known as DP kernel
density estimation), or a distance function such as $f(x,y) = \|x-y\|_2$, among
others.
Our theoretical results improve upon prior work and give better
privacy-utility trade-offs as well as faster query times for a wide range of
kernels and distance functions. The unifying approach behind our results is
leveraging `low-dimensional structures' present in the specific functions $f$
that we study, using tools such as provable dimensionality reduction,
approximation theory, and one-dimensional decomposition of the functions. Our
algorithms empirically exhibit improved query times and accuracy over prior
state of the art. We also present an application to DP classification. Our
experiments demonstrate that the simple methodology of classifying based on
average similarity is orders of magnitude faster than prior DP-SGD based
approaches for comparable accuracy.
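
For readers who want a concrete picture of the abstracted subroutine, the sketch below shows one simple, well-known way to build such a structure for the Gaussian kernel $f(x,y) = e^{-\|x-y\|_2^2/\sigma^2}$: approximate the kernel with random Fourier features and privatize the summed feature vector with the Gaussian mechanism. This is an illustrative baseline in the spirit of prior DP kernel density estimation work, not the paper's improved algorithms; the class name `DPKernelSum` and all parameter choices are assumptions made for this example.

```python
import numpy as np


class DPKernelSum:
    """Hypothetical baseline: a DP structure approximating sum_{x in X} exp(-||x-y||^2 / sigma^2)."""

    def __init__(self, X, sigma, eps, delta, num_features=2048, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        # Random Fourier features phi(x) = sqrt(2/D) * cos(W x + b) approximate the
        # Gaussian kernel; exp(-r^2 / sigma^2) corresponds to bandwidth sigma / sqrt(2),
        # hence frequencies drawn with standard deviation sqrt(2) / sigma.
        self.W = rng.normal(scale=np.sqrt(2.0) / sigma, size=(d, num_features))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
        self.D = num_features

        phi_X = np.sqrt(2.0 / self.D) * np.cos(X @ self.W + self.b)  # shape (n, D)
        feature_sum = phi_X.sum(axis=0)

        # Adding or removing one point changes feature_sum by a vector of L2 norm
        # at most sqrt(2), so calibrate Gaussian noise to that L2 sensitivity
        # (classic (eps, delta) Gaussian-mechanism calibration).
        sensitivity = np.sqrt(2.0)
        noise_scale = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        self.private_sum = feature_sum + rng.normal(scale=noise_scale, size=self.D)

    def query(self, y):
        """Approximate sum_{x in X} exp(-||x - y||_2^2 / sigma^2) for a query point y."""
        phi_y = np.sqrt(2.0 / self.D) * np.cos(y @ self.W + self.b)
        return float(self.private_sum @ phi_y)
```

Since answering a query only post-processes the released vector, the structure serves arbitrarily many queries with no additional privacy loss, and each query costs time proportional to the feature dimension rather than to $|X|$.
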
DOI: 10.48550/arxiv.2403.08917
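
The DP classification application mentioned in the abstract can likewise be sketched on top of such structures: build one private similarity structure per class and predict the class whose average similarity to the query is largest. The function below is a hypothetical illustration of that recipe, not the paper's experimental pipeline; it assumes per-class structures exposing a `query` method (such as the `DPKernelSum` sketch above) and treats class sizes as public.

```python
def predict_by_average_similarity(y, structures, class_sizes):
    """Classify y by its (privately estimated) average similarity to each class.

    structures: dict mapping class label -> object with a .query(y) method
                (e.g., one DPKernelSum per class, each built on its own budget).
    class_sizes: dict mapping class label -> number of private points in that
                 class; treated as public here for simplicity, while a fully
                 private pipeline would release noisy counts instead.
    """
    scores = {c: structures[c].query(y) / class_sizes[c] for c in structures}
    return max(scores, key=scores.get)
```

Once the per-class structures are built, classification needs no gradient steps or further passes over the private data, which is the source of the speedup over DP-SGD-based training reported in the abstract.
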