Differential Privacy Mechanisms in Neural Tangent Kernel Regression
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Summary: | Training data privacy is a fundamental problem in modern Artificial
Intelligence (AI) applications such as face recognition, recommendation
systems, and language generation, because the training data may contain
sensitive user information whose exposure raises legal concerns. To
fundamentally understand how privacy mechanisms work in AI applications, we
study differential privacy (DP) in the Neural Tangent Kernel (NTK) regression
setting: DP is one of the most powerful tools for measuring privacy in
statistical learning, and the NTK is one of the most popular frameworks for
analyzing the learning mechanisms of deep neural networks. We show provable
guarantees for both the differential privacy and the test accuracy of our NTK
regression. Furthermore, we conduct experiments on the standard image
classification dataset CIFAR10 to demonstrate that NTK regression can preserve
good accuracy under a modest privacy budget, supporting the validity of our
analysis. To our knowledge, this is the first work to provide a DP guarantee
for NTK regression. |
---|---|
DOI: | 10.48550/arxiv.2407.13621 |
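
As an illustration of the setting described in the summary, the sketch below runs kernel ridge regression with the closed-form NTK of an infinitely wide two-layer ReLU network and then perturbs the predictions with Gaussian noise. This is a minimal sketch, not the paper's method: the two-layer architecture, the function names, the ridge parameter `reg`, and the noise scale `noise_sigma` are assumptions made for this example, and the noise calibration required for a formal (epsilon, delta)-DP guarantee is exactly what the paper's analysis addresses and is not reproduced here.

```python
import numpy as np

def ntk_two_layer_relu(X1, X2):
    """Closed-form NTK of an infinitely wide two-layer ReLU network.

    Rows of X1 and X2 are projected onto the unit sphere so the
    arc-cosine expressions below apply.
    """
    X1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    X2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    u = np.clip(X1 @ X2.T, -1.0, 1.0)      # cos(theta) between inputs
    theta = np.arccos(u)
    # Sum of the derivative (degree-0) and activation (degree-1) arc-cosine terms.
    return (2.0 * (np.pi - theta) * u + np.sin(theta)) / (2.0 * np.pi)

def dp_ntk_regression(X_train, y_train, X_test, reg=1e-3, noise_sigma=0.1, rng=None):
    """NTK kernel ridge regression followed by Gaussian output perturbation.

    noise_sigma is a placeholder: a formal (epsilon, delta)-DP guarantee
    requires calibrating it to the sensitivity of the predictor, which is
    what the paper analyzes and is not reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = ntk_two_layer_relu(X_train, X_train)
    K_test = ntk_two_layer_relu(X_test, X_train)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    preds = K_test @ alpha
    return preds + rng.normal(scale=noise_sigma, size=preds.shape)

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(50, 8)), rng.normal(size=50)
X_te = rng.normal(size=(5, 8))
print(dp_ntk_regression(X_tr, y_tr, X_te, rng=rng))
```

Output perturbation is only one of several ways to make kernel regression private; noise could instead be injected into the labels or the kernel matrix, and which choice the paper adopts should be checked against the article itself.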