A Privacy-Aware and Incremental Defense Method Against GAN-Based Poisoning Attack

Bibliographic Details
Published in: IEEE Transactions on Computational Social Systems, 2024-04, Vol. 11 (2), p. 1-14
Main authors: Qiao, Feifei; Li, Zhong; Kong, Yubo
Format: Article
Language: English
Abstract: Federated learning is widely used as a fraud detection framework in the domain of financial risk management, as it improves model accuracy without exchanging training data. One of the challenges in federated learning is the GAN-based poisoning attack, an intractable type of poisoning attack that degrades global model accuracy and leaks privacy. Most existing defenses against GAN-based poisoning attacks suffer from three problems: 1) dependence on validation datasets; 2) inability to deal with incremental poisoning attacks; and 3) privacy leakage. To address these problems, we present a privacy-aware and incremental defense (PID) method to detect malicious participants and protect privacy. In PID, we design a method that accumulates the offsets of model parameters from participants over all epochs so far to represent the moving tendency of the model parameters. Based on these accumulations, we can distinguish adversaries from normal participants under an incremental poisoning attack. We also use multiple trust domains to reduce the rate of misjudging benign participants as adversaries. Moreover, differentiated differential privacy is applied before the global model is sent, to protect the privacy of participants' training datasets. Experiments conducted on two real-world datasets under a financial fraud detection scenario demonstrate that PID reduces the fallout of adversary detection (the rate of misjudging benign participants as adversaries) by at least 51.1% and improves the speed of detecting all malicious participants by at least 33.4% compared with two popular defense methods. In addition, the privacy preservation of PID is also effective.
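As a reading aid, the sketch below illustrates the two mechanisms the abstract describes concretely enough to mock up in Python: accumulating each participant's per-epoch parameter offset so that a slowly drifting (incremental) poisoner eventually stands out, and noising the global model before it is broadcast. This is an illustrative assumption of how such a defense could look, not the paper's actual PID algorithm; the names OffsetAccumulator and add_gaussian_dp_noise are hypothetical, and the trust-domain logic and the differentiated (per-participant) noise calibration are omitted.

import numpy as np

def flatten(params):
    # Concatenate a model's parameter arrays into a single vector.
    return np.concatenate([p.ravel() for p in params])

class OffsetAccumulator:
    """Hypothetical sketch: sum each participant's offset from the
    global model over all epochs seen so far. A poisoner that drifts
    a little each epoch accumulates a large total offset even though
    every single-epoch update looks benign."""

    def __init__(self, num_participants, dim):
        self.acc = np.zeros((num_participants, dim))

    def update(self, global_params, local_params_per_participant):
        g = flatten(global_params)
        for i, local in enumerate(local_params_per_participant):
            self.acc[i] += flatten(local) - g

    def suspicion_scores(self):
        # Distance of each accumulated offset from the coordinate-wise
        # median tendency; a larger score is more suspicious.
        median = np.median(self.acc, axis=0)
        return np.linalg.norm(self.acc - median, axis=1)

def add_gaussian_dp_noise(params, sigma, rng=None):
    # Stand-in for the paper's differentiated differential privacy:
    # perturb the global model before sending it out. (The paper
    # calibrates noise per participant; a single sigma is used here.)
    rng = rng or np.random.default_rng()
    return [p + rng.normal(0.0, sigma, size=p.shape) for p in params]

# Toy usage: three participants, ten epochs; participant 2 poisons
# incrementally with a small constant bias.
rng = np.random.default_rng(0)
global_params = [np.zeros(5)]
acc = OffsetAccumulator(num_participants=3, dim=5)
for _ in range(10):
    updates = [[global_params[0] + rng.normal(0.0, 0.01, 5)] for _ in range(3)]
    updates[2][0] += 0.05  # tiny per-epoch drift, large once accumulated
    acc.update(global_params, updates)
print(acc.suspicion_scores())  # participant 2 stands out
noisy = add_gaussian_dp_noise(global_params, sigma=0.01)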
ISSN: 2329-924X, 2373-7476
DOI: 10.1109/TCSS.2023.3263241