Privacy preservation for federated learning in health care

Bibliographic details
Published in: Patterns (New York, N.Y.), 2024-07, Vol. 5 (7), p. 100974, Article 100974
Main authors: Pati, Sarthak, Kumar, Sourav, Varma, Amokh, Edwards, Brandon, Lu, Charles, Qu, Liangqiong, Wang, Justin J., Lakshminarayanan, Anantharaman, Wang, Shih-han, Sheller, Micah J., Chang, Ken, Singh, Praveer, Rubin, Daniel L., Kalpathy-Cramer, Jayashree, Bakas, Spyridon
Format: Article
Language: English
Online access: Full text
Description
Abstract: Artificial intelligence (AI) shows potential to improve health care by leveraging data to build models that can inform clinical workflows. However, access to large quantities of diverse data is needed to develop robust, generalizable models. Data sharing across institutions is not always feasible due to legal, security, and privacy concerns. Federated learning (FL) allows for multi-institutional training of AI models, obviating data sharing, albeit with different security and privacy concerns. Specifically, insights exchanged during FL can leak information about institutional data. In addition, FL can introduce issues when there is limited trust among the entities performing the compute. With the growing adoption of FL in health care, it is imperative to elucidate its potential risks. We thus summarize the privacy-preserving FL literature in this work, with special regard to health care. We draw attention to threats and review mitigation approaches. We anticipate that this review will become a health-care researcher's guide to security and privacy in FL.

Significant improvements can be made to clinical AI applications when multiple health-care institutions collaborate to build models that leverage large and diverse datasets. Federated learning (FL) provides a method for such AI model training, where each institution shares only model updates derived from its private training data, rather than the explicit patient data. This has been demonstrated to advance the state of the art for many clinical AI applications. However, open and persistent federations raise the question of trust, and model updates have raised concerns about possible information leakage. Prior work has gone into understanding the inherent privacy risks and into developing mitigation techniques. Focusing on FL in health care, we review the privacy risks and the costs and limitations associated with state-of-the-art mitigations. We hope to provide a guide for health-care researchers seeking to engage in FL as a new paradigm of secure and private collaborative AI.

AI can enhance health care by using data to create useful models. However, sharing data between institutions is challenging due to legal and privacy issues. Federated learning (FL) allows institutions to train AI models without sharing data, but it also has its own security concerns. As FL becomes more commonplace in health care, it is crucial to understand its risks. This work reviews the literature on privacy-preserving FL in health care.
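To make the mechanism the abstract describes concrete, below is a minimal sketch (not from the paper) of two ideas it names: federated averaging, in which each institution trains locally and shares only model weights, and one widely studied leakage mitigation, secure aggregation via pairwise additive masking, under which the server only ever sees the sum of updates. All names, data, and parameters are illustrative assumptions, and the toy masking scheme merely stands in for real cryptographic protocols.

```python
# A self-contained sketch of federated averaging with a toy secure-
# aggregation step; illustrative only, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution's local training: gradient steps for least squares
    on private data. Only the updated weights leave the institution."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def secure_sum(local_ws, rng):
    """Toy secure aggregation: each pair of sites (i, j) shares a random
    mask that i adds and j subtracts, so the masks cancel in the sum and
    the server never sees any single site's update in the clear."""
    masked = [w.copy() for w in local_ws]
    n = len(masked)
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=masked[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return np.sum(masked, axis=0)

# Three "institutions" holding private samples of the same ground truth.
true_w = np.array([1.0, -2.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    datasets.append((X, y))

# Federated rounds: broadcast global weights, train locally, aggregate.
# With equal-sized sites a plain mean suffices; FedAvg proper weights
# each update by the local sample count.
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in datasets]
    global_w = secure_sum(local_ws, rng) / len(local_ws)

print("federated estimate:", global_w, "ground truth:", true_w)
```

In deployed systems, the pairwise masks are derived from cryptographic key agreement and the protocol must tolerate sites dropping out mid-round; the toy version above assumes equal-sized sites that all stay online. Note also that masking hides individual updates from the server but does not by itself prevent inference from the aggregated model, which is why the review treats it as one mitigation among several.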
ISSN: 2666-3899
DOI: 10.1016/j.patter.2024.100974