Protecting machine learning models from privacy attacks
Format: Patent
Language: English
Abstract: This disclosure describes methods and systems for protecting machine learning models against privacy attacks. A machine learning model may be trained using a set of training data together with causal relationship data that identifies a subset of features in the training data having a causal relationship with the outcome. The model may learn a function that predicts an outcome from the training data and the causal relationship data. A predefined privacy guarantee value may be received, and an amount of noise may be added to the machine learning model, at the parameter level, so that the model's privacy guarantee is equivalent to or stronger than the predefined value.
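The abstract does not specify how the noise is calibrated or which training procedure is used. The sketch below is only a minimal illustration of the general idea, training on an assumed causal feature subset and then adding parameter-level noise scaled to a target privacy value. The function names, the logistic-regression model, the output-perturbation scheme, and the sensitivity bound are all illustrative assumptions, not the method claimed in the disclosure.

```python
# Illustrative sketch (assumptions, not the disclosed method): train on a
# causal feature subset, then add Laplace noise to the learned parameters so
# the released model targets a predefined privacy value epsilon.
import numpy as np

def train_logistic(X, y, lam=0.1, lr=0.1, epochs=500):
    """L2-regularized logistic regression fit by plain gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n + lam * w
        w -= lr * grad
    return w

def privatize(w, n, lam, epsilon):
    """Parameter-level (output) perturbation: add Laplace noise to the weights.
    The sensitivity bound 2 / (n * lam) is an assumed value in the style of
    standard bounds for strongly convex objectives; a smaller epsilon
    (stronger guarantee) yields larger noise."""
    sensitivity = 2.0 / (n * lam)
    return w + np.random.laplace(scale=sensitivity / epsilon, size=w.shape)

# Toy data: only the first two features are assumed to be causal for the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(float)

causal_idx = [0, 1]            # hypothetical causal relationship data
X_causal = X[:, causal_idx]

w = train_logistic(X_causal, y, lam=0.1)
w_private = privatize(w, n=len(y), lam=0.1, epsilon=1.0)  # predefined guarantee value
print("clean params:  ", np.round(w, 3))
print("private params:", np.round(w_private, 3))
```

In this sketch the privacy guarantee value plays the role of a differential-privacy budget: lowering `epsilon` increases the noise added to the parameters, trading predictive accuracy for a stronger guarantee, which mirrors the equivalence-or-stronger condition described in the abstract.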