Federated Learning with Diffusion Models for Privacy-Sensitive Vision Tasks
Main authors: , , , , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Summary: Diffusion models have shown great potential for vision-related tasks, particularly image generation. However, they are typically trained in a centralized manner, relying on data collected from publicly available sources. This approach may not be feasible or practical in many domains, such as medicine, where data collection raises privacy concerns. Despite the challenges associated with privacy-sensitive data, such domains could still benefit from the valuable vision services that diffusion models provide. Federated learning (FL) plays a crucial role in enabling decentralized model training without compromising data privacy. Instead of collecting data, an FL system gathers model parameters, effectively safeguarding the private data of the parties involved. This makes FL systems vital for managing decentralized learning tasks, especially when privacy-sensitive data is distributed across a network of clients. Nonetheless, FL presents its own set of challenges due to its distributed nature and privacy-preserving properties. In this study, we therefore explore an FL strategy for training diffusion models, paving the way for federated diffusion models. We conduct experiments on various FL scenarios, and our findings demonstrate that federated diffusion models have great potential to deliver vision services to privacy-sensitive domains.
DOI: 10.48550/arxiv.2311.16538
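The record does not spell out the paper's training procedure, but the core mechanism named in the abstract — clients share model parameters rather than raw data, and a server aggregates them — can be illustrated with a minimal FedAvg-style sketch. Everything below (the toy denoiser, the simplified noising rule, the hyperparameters) is an illustrative assumption, not the paper's actual implementation.

```python
# Minimal FedAvg-style sketch (illustrative only): each client trains a tiny
# denoising model on its private data; the server averages parameters and never
# sees the data. The model and noise schedule are assumptions, not the paper's.
import copy
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for a diffusion model's noise-prediction network."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x_noisy, t):
        # Condition on the (normalized) timestep by concatenating it to the input.
        return self.net(torch.cat([x_noisy, t], dim=-1))

def local_update(global_model, data, steps=10, lr=1e-3):
    """One client's round: start from the global weights, fit on private data only."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        x0 = data[torch.randint(len(data), (16,))]                 # private mini-batch
        t = torch.rand(x0.size(0), 1)                              # random timestep in [0, 1]
        noise = torch.randn_like(x0)
        x_noisy = torch.sqrt(1 - t) * x0 + torch.sqrt(t) * noise   # simplified noising rule (assumed)
        loss = ((model(x_noisy, t) - noise) ** 2).mean()           # predict the added noise
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fedavg(state_dicts):
    """Server step: average client parameters; raw data never leaves the clients."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# One communication round over three simulated clients holding private data.
global_model = TinyDenoiser()
clients = [torch.randn(256, 32) for _ in range(3)]                 # stand-ins for private datasets
client_states = [local_update(global_model, d) for d in clients]
global_model.load_state_dict(fedavg(client_states))
```

A real federated diffusion setup would replace the toy denoiser with a U-Net, use a proper noise schedule, and repeat local training and aggregation over many communication rounds, but the privacy-relevant property is the same: only parameter updates cross the network.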