PRIVATE MODEL UTILITY BY MINIMIZING EXPECTED LOSS UNDER NOISE
Format: Patent
Language: English
Abstract: Training of a model is performed to minimize expected loss under noise (ELUN) while maintaining differential privacy. Noise is added to the weights of a machine learning model as random samples drawn from a noise distribution, the noise being added in accordance with a privacy budget. The ELUN is minimized by using a loss function that anticipates the noise added to the weights of the machine learning model, in order to find a point in the parameter space at which the loss is robust to noise in the weights. The addition of noise and the minimization of the ELUN are iterated until the weights converge and the optimization constraints are satisfied. The model can then be applied to arbitrary inputs while protecting the privacy of the training data used to train it.
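
The abstract describes a training loop rather than a specific implementation. The sketch below illustrates one plausible reading of it, assuming a simple linear-regression model, Gaussian weight noise, and a Monte Carlo estimate of the expected loss under noise; the mapping from the privacy budget `epsilon` to the noise scale `sigma`, the convergence test, and the final noisy weight release are illustrative placeholders, not the calibration or mechanism a real differentially private system would use.

```python
# Minimal sketch of ELUN-style training (assumptions noted above): the loss
# anticipates Gaussian noise on the weights, and training seeks weights whose
# loss is robust to that noise.
import numpy as np

rng = np.random.default_rng(0)

def elun_grad(w, X, y, sigma, n_samples=16):
    """Gradient of a Monte Carlo estimate of the expected squared loss when
    Gaussian noise of scale sigma is added to the weights w."""
    grad = np.zeros_like(w)
    for _ in range(n_samples):
        w_noisy = w + rng.normal(0.0, sigma, size=w.shape)
        pred = X @ w_noisy
        grad += 2.0 * X.T @ (pred - y) / len(y)
    return grad / n_samples

def train_elun(X, y, epsilon=1.0, lr=0.05, tol=1e-5, max_iters=2000):
    # Placeholder mapping from privacy budget to noise scale (assumption).
    sigma = 1.0 / epsilon
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        w_new = w - lr * elun_grad(w, X, y, sigma)
        if np.linalg.norm(w_new - w) < tol:  # weights have converged
            w = w_new
            break
        w = w_new
    # Release a single noisy copy of the weights, drawn from the same
    # distribution the loss anticipated (illustrative release step).
    return w + rng.normal(0.0, sigma, size=w.shape)

# Tiny usage example on synthetic data.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)
w_private = train_elun(X, y, epsilon=2.0)
print("released noisy weights:", w_private)
```

Because the expected loss is estimated by averaging over several noisy weight samples, the optimizer is pushed toward flat regions of the loss surface, where adding the privacy noise to the released weights degrades accuracy the least.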