VividMed: Vision Language Model with Versatile Visual Grounding for Medicine
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Recent advancements in Vision Language Models (VLMs) have demonstrated remarkable promise in generating visually grounded responses. However, their application in the medical domain is hindered by unique challenges. For instance, most VLMs rely on a single method of visual grounding, whereas complex medical tasks demand more versatile approaches. Additionally, while most VLMs process only 2D images, a large portion of medical images are 3D. The lack of medical data further compounds these obstacles. To address these challenges, we present VividMed, a vision language model with versatile visual grounding for medicine. Our model supports generating both semantic segmentation masks and instance-level bounding boxes, and accommodates various imaging modalities, including both 2D and 3D data. We design a three-stage training procedure and an automatic data synthesis pipeline based on open datasets and models. Besides visual grounding tasks, VividMed also excels in other common downstream tasks, including Visual Question Answering (VQA) and report generation. Ablation studies empirically show that the integration of visual grounding ability leads to improved performance on these tasks. Our code is publicly available at https://github.com/function2-llx/MMMM. |
---|---|
DOI: | 10.48550/arxiv.2410.12694 |