CCF Based System Framework In Federated Learning Against Data Poisoning Attacks
Full Description

Bibliographic Details
Published in: Journal of Applied Science and Engineering 2023-01, Vol. 26 (7), p. 973-981
Main authors: Ibrahim M. Ahmed, Manar Younis Kashmoola
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Nowadays, smart systems attract a lot of attention as several smart applications are growing. Distributed machine learning such as federated learning plays an essential role in smart systems, including 6G applications. The main issues facing federated learning (F.L.) are security and performance, both of which can be degraded by poisoning attacks. One of the most common poisoning attacks is the impersonation attack, such as the Sybil attack. This paper proposes a new framework that increases the security of federated learning against Sybil poisoning attacks. The proposed framework, called FED_CCF, creates a hybrid environment combining federated learning with Microsoft CCF (Confidential Consortium Framework). It provides a secure and reliable environment that misleads attackers targeting federated learning. The MNIST dataset is used to investigate the accuracy of the F.L. model with FED_CCF. The F.L. model is evaluated on the MNIST dataset with 30% of devices being malicious and mounting a Sybil attack. The experimental results show that the F.L. system implementing FED_CCF outperforms Vanilla F.L. in terms of accuracy, achieving approximately 95.2% compared to only 2.55% for Vanilla F.L. under the Sybil poisoning attack.
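The Sybil poisoning attack the abstract refers to can be sketched in a few lines. This is an illustrative toy only, assuming scalar model updates and plain unweighted federated averaging; it shows the attack the paper defends against, not the FED_CCF defense itself:

```python
# Toy sketch of a Sybil poisoning attack on federated averaging.
# Client model updates are simplified to single floats; all names and
# values here are illustrative assumptions, not the paper's setup.

def fedavg(updates):
    """Plain FedAvg: unweighted mean of all submitted client updates."""
    return sum(updates) / len(updates)

# 7 honest clients submit updates clustered around the true value 1.0.
honest = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0]

# One attacker registers 3 fake (Sybil) identities -- 30% of the client
# population, matching the abstract's experiment -- and has every fake
# identity submit the same poisoned update with the opposite sign.
sybils = [-5.0] * 3

clean = fedavg(honest)              # 1.0: the model moves the right way
attacked = fedavg(honest + sybils)  # -0.8: the attacker flips the step
```

Because plain FedAvg trusts every registered identity equally, a single attacker with enough fake identities can drag the aggregate arbitrarily far, which is why the paper's accuracy collapses to 2.55% for Vanilla F.L. under attack.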
ISSN:2708-9967
2708-9975
DOI:10.6180/jase.202307_26(7).0008