FedBrain-Distill: Communication-Efficient Federated Brain Tumor Classification Using Ensemble Knowledge Distillation on Non-IID Data
Saved in:
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The brain is one of the most complex organs in the human body. Owing to
this complexity, the classification of brain tumors remains a significant
challenge, making brain tumors a particularly serious medical issue.
Techniques such as Machine Learning (ML) coupled with Magnetic Resonance
Imaging (MRI) have paved the way for doctors and medical institutions to
classify different types of tumors. However, these techniques suffer from
limitations that violate patients' privacy. Federated Learning (FL) has
recently been introduced to address this issue, but FL itself suffers from
limitations such as communication costs and dependence on model architecture,
forcing all clients to use identical model architectures. In this paper, we
propose FedBrain-Distill, an approach that leverages Knowledge Distillation
(KD) in an FL setting, preserving users' privacy while ensuring the
independence of FL clients in terms of model architecture. FedBrain-Distill
uses an ensemble of teachers that distill their knowledge to a simple student
model (a minimal sketch follows this abstract). The evaluation of
FedBrain-Distill demonstrated high accuracy for both Independent and
Identically Distributed (IID) and non-IID data with substantially low
communication costs on the real-world Figshare brain tumor dataset. It is
worth mentioning that we used a Dirichlet distribution to partition the data
into IID and non-IID splits (also sketched below). All implementation details
are accessible through our GitHub repository.
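The ensemble distillation step referenced in the abstract can be illustrated in a few lines. The sketch below is not the authors' implementation (that lives in their GitHub repository); it is a minimal PyTorch illustration, with hypothetical `student`, `teachers`, and hyperparameter names, of how several teachers' softened predictions can be averaged and matched by a student via a temperature-scaled KL-divergence loss.

```python
# Minimal sketch of ensemble knowledge distillation (hypothetical names;
# not the paper's exact code). Several teachers produce soft labels whose
# average supervises a smaller student model.
import torch
import torch.nn.functional as F

def distill_step(student, teachers, x, y, optimizer, T=4.0, alpha=0.7):
    """One training step: blend a KD loss (vs. averaged teacher logits)
    with ordinary cross-entropy on the hard labels y."""
    with torch.no_grad():
        # Average the teachers' temperature-softened predictions (the ensemble).
        teacher_probs = torch.stack(
            [F.softmax(t(x) / T, dim=1) for t in teachers]
        ).mean(dim=0)
    student_logits = student(x)
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        teacher_probs,
        reduction="batchmean",
    ) * (T * T)  # standard T^2 scaling so gradients match the CE term
    ce_loss = F.cross_entropy(student_logits, y)
    loss = alpha * kd_loss + (1 - alpha) * ce_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only soft labels (or a compact student) cross the network rather than full teacher weights, and because each client's teacher can have any architecture, a scheme of this shape addresses both the communication-cost and architecture-dependence limitations the abstract mentions.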
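Likewise, the Dirichlet partitioning mentioned in the abstract is a standard way to simulate label skew across FL clients. The helper below is a hypothetical NumPy sketch, not the paper's code: for each class, per-client proportions are drawn from Dir(alpha), so a small alpha yields highly non-IID splits while a large alpha approaches IID.

```python
# Sketch of label-skew partitioning with a Dirichlet distribution
# (hypothetical helper; the paper's exact partitioning is in its repo).
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Return one array of sample indices per client."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Per-client share of this class, drawn from Dir(alpha * 1).
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]

# Example: 300 samples over 3 tumor classes, split across 3 clients.
# alpha=0.5 gives skewed (non-IID) splits; alpha=100 is close to IID.
labels = np.random.default_rng(1).integers(0, 3, size=300)
parts = dirichlet_partition(labels, num_clients=3, alpha=0.5)
```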
DOI: 10.48550/arxiv.2409.05359