Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | In this study, we aim to initiate the development of a Radiology Foundation
Model, termed RadFM. We consider the construction of foundation models
from three perspectives, namely dataset construction, model design, and
thorough evaluation. Our contributions can be summarized as follows: (i) we
construct a large-scale Medical Multi-modal Dataset, MedMD, which consists of
16M 2D and 3D medical scans with high-quality text descriptions or reports
across various data formats, modalities, and tasks, covering over 5,000 distinct
diseases. To the best of our knowledge, this is the first large-scale,
high-quality medical visual-language dataset with both 2D and 3D scans; (ii)
we propose an architecture that enables visually conditioned generative
pre-training, i.e., it integrates text input with 2D or 3D medical scans
and generates responses for diverse radiology tasks. The model
was initially pre-trained on MedMD and subsequently fine-tuned on a
domain-specific dataset, a radiology-focused cleaned version of MedMD
containing 3M radiologic visual-language pairs, termed RadMD; (iii) we
propose a new evaluation benchmark, RadBench, that comprises five tasks,
including modality recognition, disease diagnosis, visual question answering,
report generation, and rationale diagnosis, aiming to comprehensively assess the
capability of foundation models in handling practical clinical problems. We
conduct both automatic and human evaluation on RadBench; in both cases, RadFM
outperforms existing publicly accessible multi-modal foundation models,
including OpenFlamingo, MedFlamingo, MedVInT, and GPT-4V.
Additionally, we adapt RadFM to different public benchmarks, surpassing
existing state-of-the-art methods on diverse datasets. All code, data, and model
checkpoints will be made publicly available to promote further research and
development in the field. |
---|---|
DOI: | 10.48550/arxiv.2308.02463 |
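For readers wondering what "visually conditioned generative pre-training" on 2D/3D scans can look like in practice, below is a minimal PyTorch sketch: a toy 3D-CNN turns a volume into a short sequence of visual tokens, which are prepended to text token embeddings and fed to a small causal transformer trained with next-token prediction. The module names, dimensions, encoder design, and loss shown here are illustrative assumptions for exposition only; they are not RadFM's actual components or training setup.

```python
# Minimal sketch of visually conditioned generative pre-training (hypothetical,
# not RadFM's real architecture): visual tokens from a 3D scan condition a
# causal language model that generates the accompanying text/report.
import torch
import torch.nn as nn

class VisualEncoder3D(nn.Module):
    """Toy 3D-CNN encoder producing a fixed number of visual tokens."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.Conv3d(32, dim, kernel_size=3, stride=2, padding=1), nn.GELU(),
        )
        self.pool = nn.AdaptiveAvgPool3d((2, 2, 2))  # -> 8 visual tokens

    def forward(self, vol):                      # vol: (B, 1, D, H, W)
        feats = self.pool(self.conv(vol))         # (B, dim, 2, 2, 2)
        return feats.flatten(2).transpose(1, 2)   # (B, 8, dim)

class VisuallyConditionedLM(nn.Module):
    """Causal LM over the concatenated sequence [visual tokens; text tokens]."""
    def __init__(self, vocab_size=32000, dim=256, n_layers=2, n_heads=4):
        super().__init__()
        self.visual = VisualEncoder3D(dim)
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)  # causal via mask
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, vol, text_ids):
        vis = self.visual(vol)                          # (B, Nv, dim)
        txt = self.embed(text_ids)                      # (B, Nt, dim)
        seq = torch.cat([vis, txt], dim=1)              # prepend visual tokens
        n = seq.size(1)
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        hid = self.decoder(seq, mask=causal)
        return self.lm_head(hid[:, vis.size(1):])       # logits at text positions

# Usage: next-token prediction on the text, conditioned on the scan.
model = VisuallyConditionedLM()
scan = torch.randn(2, 1, 32, 64, 64)                   # toy 3D volumes
tokens = torch.randint(0, 32000, (2, 16))               # toy report token ids
logits = model(scan, tokens)                            # (2, 16, 32000)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 32000), tokens[:, 1:].reshape(-1))
```

The same interleaving idea extends to 2D inputs (a single-slice volume) and to prompts that mix questions with scans, which is how a single generative interface can cover tasks such as VQA and report generation described in the abstract.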