Exploring the Unfairness of DP-SGD Across Settings
Published in: | The Third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22), 2022 |
Format: | Article |
Language: | English |
Abstract: | End users and regulators require private and fair artificial intelligence models, but previous work suggests these objectives may be at odds. We use the CivilComments dataset to evaluate the impact of applying the de facto standard approach to privacy, DP-SGD, across several fairness metrics. We evaluate three implementations of DP-SGD: for dimensionality reduction (PCA), linear classification (logistic regression), and robust deep learning (Group-DRO). We establish a negative, logarithmic correlation between privacy and fairness in the case of linear classification and robust deep learning. DP-SGD had no significant impact on fairness for PCA, but upon inspection, also did not seem to lead to private representations. |
DOI: | 10.48550/arxiv.2202.12058 |
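The abstract above refers to DP-SGD, which privatizes training by clipping each example's gradient and adding Gaussian noise before the parameter update. As a rough illustration only, here is a minimal PyTorch sketch of one DP-SGD step for binary logistic regression; the random data, model size, and hyperparameters (clip_norm, noise_multiplier, learning rate) are illustrative assumptions, not the paper's configuration, and a real experiment would typically rely on a dedicated library such as Opacus.

```python
# Minimal sketch of one DP-SGD step (per-example clipping + Gaussian noise).
# Hyperparameters and data below are illustrative, not the paper's setup.
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update on a batch (xb, yb)."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip this example's gradient to L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale
    batch = len(xb)
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Gaussian noise with std = noise_multiplier * clip_norm.
            noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
            p -= lr * (s + noise) / batch

# Illustrative usage on random data with 20 features (hypothetical stand-in
# for featurized CivilComments examples).
d = 20
model = torch.nn.Linear(d, 1)
loss_fn = torch.nn.BCEWithLogitsLoss()
xb = torch.randn(32, d)
yb = torch.randint(0, 2, (32, 1)).float()
dp_sgd_step(model, loss_fn, xb, yb)
```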