MiDA: Membership inference attacks against domain adaptation


Bibliographic Details
Published in: ISA Transactions, 2023-10, Vol. 141, pp. 103-112
Main Authors: Zhang, Yuanjie; Zhao, Lingchen; Wang, Qian
Format: Article
Language: English
Online Access: Full Text
Description

Abstract: Domain adaptation has become an effective solution for training neural networks with insufficient training data. In this paper, we investigate a vulnerability of domain adaptation that can breach sensitive information about the training dataset. We propose a new membership inference attack against domain adaptation models to infer the membership of samples from the target domain. By leveraging background knowledge about the additional source domain in domain adaptation tasks, our attack exploits the similar distributions of target- and source-domain data to determine, with high efficiency and accuracy, whether a specific data sample belongs to the training set. In particular, the proposed attack can be deployed in a practical scenario where the attacker cannot obtain any details of the model. We conduct extensive evaluations on object and digit recognition tasks. Experimental results show that our method attacks domain adaptation models with a high success rate.

Highlights:
- We are the first to analyze the privacy risks of domain adaptation models.
- The proposed attack is more effective and more easily realizable than existing attacks.
- Results show that we can attack domain adaptation models with a high success rate.
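The black-box setting described in the abstract, where an attacker infers membership from a model's outputs alone, can be illustrated with a generic confidence-thresholding sketch. This is not the paper's method; the toy model and all names below are hypothetical stand-ins, and the threshold calibration merely gestures at the role shadow/source-domain data would play for a real attacker.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def toy_model(n, member):
    """Stand-in for a black-box target model over 10 classes.
    Members get a larger logit on their true class (class 0 here),
    mimicking the overfitting that membership inference exploits."""
    logits = rng.normal(0.0, 1.0, size=(n, 10))
    logits[:, 0] += 4.0 if member else 1.0
    return softmax(logits)

def membership_score(probs):
    # Maximum posterior confidence: training-set members tend to score higher.
    return probs.max(axis=1)

member_scores = membership_score(toy_model(500, member=True))
nonmember_scores = membership_score(toy_model(500, member=False))

# Calibrate a decision threshold. A real attacker would use shadow models
# trained on data they control (e.g., the source domain); here we simply
# take the pooled median of the toy scores.
tau = np.median(np.concatenate([member_scores, nonmember_scores]))
tpr = (member_scores >= tau).mean()    # members correctly flagged
fpr = (nonmember_scores >= tau).mean()  # non-members wrongly flagged
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")
```

The gap between TPR and FPR is what makes the attack meaningful: whenever a model is systematically more confident on its training samples, a simple threshold on output confidence already leaks membership, even without any knowledge of the model internals.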
ISSN: 0019-0578, 1879-2022
DOI: 10.1016/j.isatra.2023.01.021