A Comprehensive Survey of Foundation Models in Medicine
Format: Article
Language: English
Online Access: Order full text
Abstract: Foundation models (FMs) are large-scale deep learning models that are
developed using large datasets and self-supervised learning methods. These
models serve as a base for different downstream tasks, including healthcare.
FMs have been adopted with great success across various domains within
healthcare. Existing healthcare-based surveys have not yet included all of
these domains. Therefore, we provide a detailed survey of FMs in healthcare. We
focus on the history, learning strategies, flagship models, applications, and
challenges of FMs. We explore how FMs such as the BERT and GPT families are
reshaping various healthcare domains, including clinical large language models,
medical image analysis, and omics. Furthermore, we provide a detailed taxonomy
of healthcare applications facilitated by FMs, such as clinical NLP, medical
computer vision, graph learning, and other biology-related tasks. Despite the
promising opportunities FMs provide, they also have several associated
challenges, which are explained in detail. We also outline open research issues
and potential lessons learned to provide researchers and practitioners with
insights into the capabilities of FMs in healthcare to advance their deployment
and mitigate associated risks.
DOI: 10.48550/arxiv.2406.10729