Securing Multi-Source Domain Adaptation With Global and Domain-Wise Privacy Demands


Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2024-12, Vol. 36 (12), pp. 9235-9248
Authors: Chai, Shuwen; Xiao, Yutang; Liu, Feng; Zhu, Jian; Zhou, Yuan
Format: Article
Language: English
Description
Abstract: Making a large amount of training data available to deep learning models while preserving data privacy are two ever-growing concerns in the machine learning community. Multi-source domain adaptation (MDA) leverages data from different domains and aggregates it to improve performance on the target task, but the privacy leakage risk of publishing models, where a malicious attacker may mount membership or attribute inference attacks, is even more complicated than the one faced by single-source domain adaptation. In this paper, we tackle the problem of effectively protecting data privacy while training on and aggregating multi-source information, where each source domain enjoys an independent privacy budget. Specifically, we develop a differentially private MDA (DPMDA) algorithm that provides domain-wise privacy protection with an adaptive weighting scheme based on task similarity and task-specific privacy budget. We evaluate our algorithm on three benchmark tasks and show that DPMDA can effectively leverage the different privacy budgets of the source domains and consistently outperforms existing private baselines, with a reasonable gap to the non-private state of the art.
ISSN: 1041-4347
EISSN: 1558-2191
DOI: 10.1109/TKDE.2024.3459890
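
The record does not include any implementation, but the abstract describes a concrete mechanism: per-domain differentially private training with an adaptive aggregation weight driven by task similarity and each domain's own privacy budget. Below is a minimal, hypothetical Python sketch of that general idea, using per-domain gradient clipping plus the Gaussian mechanism; the function names, the proportional weighting rule, and all numeric values are illustrative assumptions, not the authors' DPMDA algorithm.

```python
import numpy as np

def private_domain_gradient(grad, clip_norm, epsilon, delta, rng):
    """Clip one source domain's gradient and add Gaussian noise calibrated
    to that domain's own (epsilon, delta) budget (standard Gaussian mechanism)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return grad * scale + rng.normal(0.0, sigma, size=grad.shape)

def aggregate(noisy_grads, similarities, epsilons):
    """Hypothetical adaptive weighting: weight each domain by task similarity
    times its budget, since a looser budget (larger epsilon) means less
    injected noise and hence a more trustworthy gradient."""
    w = np.asarray(similarities) * np.asarray(epsilons)
    w = w / w.sum()
    return sum(wi * g for wi, g in zip(w, noisy_grads))

rng = np.random.default_rng(0)
# Three hypothetical source domains, each with an independent privacy budget.
grads = [rng.normal(size=10) for _ in range(3)]
epsilons = [0.5, 1.0, 4.0]   # per-domain privacy budgets (assumed values)
sims = [0.9, 0.6, 0.8]       # similarity of each source to the target (assumed)
noisy = [private_domain_gradient(g, 1.0, eps, 1e-5, rng)
         for g, eps in zip(grads, epsilons)]
update = aggregate(noisy, sims, epsilons)   # one aggregated model update
```

The intuition behind scaling weights by epsilon is that a looser budget injects less noise, so that domain's contribution is more reliable; how DPMDA actually combines similarity and budget is specified in the paper itself, not in this record.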