Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Distributed Collaborative Machine Learning (DCML) is a potential alternative to address the privacy concerns associated with centralized machine learning. Split Learning (SL) and Federated Learning (FL) are two effective learning approaches in DCML. Recently, there has been increased interest in the hybrid of FL and SL, known as SplitFed Learning (SFL). This research is the earliest attempt to study, analyze, and present the impact of data poisoning attacks in SFL. We propose three novel attack strategies for SFL, namely untargeted, targeted, and distance-based attacks. All the attack strategies aim to degrade the performance of the DCML-based classifier. We test the proposed attack strategies on two case studies: electrocardiogram signal classification and automatic handwritten digit recognition. A series of attack experiments was conducted by varying the percentage of malicious clients and the choice of the model split layer between the clients and the server. A comprehensive analysis of the attack strategies clearly shows that untargeted and distance-based poisoning attacks are more effective at degrading the classifier outcomes than targeted attacks in SFL. |
DOI: | 10.48550/arxiv.2307.03197 |
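
The abstract names untargeted and targeted data poisoning attacks but does not spell out how a malicious client would poison its data. As a rough illustration only, the sketch below shows two common label-flipping variants that match the untargeted/targeted distinction; the function names, flip fraction, and class choices are hypothetical and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def untargeted_flip(labels, num_classes, flip_fraction=1.0):
    """Hypothetical untargeted poisoning: replace a fraction of labels
    with uniformly random *incorrect* labels."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(flip_fraction * len(labels)),
                     replace=False)
    for i in idx:
        wrong = [c for c in range(num_classes) if c != labels[i]]
        poisoned[i] = rng.choice(wrong)
    return poisoned

def targeted_flip(labels, source_class, target_class):
    """Hypothetical targeted poisoning: relabel every sample of
    `source_class` as `target_class`, leaving other samples untouched."""
    poisoned = labels.copy()
    poisoned[labels == source_class] = target_class
    return poisoned

# Example: poison the labels held by a malicious SFL client.
y = rng.integers(0, 10, size=100)   # e.g. digit labels, as in the MNIST case study
y_untargeted = untargeted_flip(y, num_classes=10, flip_fraction=0.4)
y_targeted = targeted_flip(y, source_class=7, target_class=1)
```

The distance-based attack named in the abstract would instead craft poisoned data according to some distance criterion; the abstract gives no further detail, so it is not sketched here.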