A Data-scalable Transformer for Medical Image Segmentation: Architecture, Model Efficiency, and Benchmark
Format: Article
Language: English
Abstract: Transformers have demonstrated remarkable performance in natural language processing and computer vision. However, existing vision Transformers struggle to learn from limited medical data and are unable to generalize on diverse medical image tasks. To tackle these challenges, we present MedFormer, a data-scalable Transformer designed for generalizable 3D medical image segmentation. Our approach incorporates three key elements: a desirable inductive bias, hierarchical modeling with linear-complexity attention, and multi-scale feature fusion that integrates spatial and semantic information globally. MedFormer can learn across tiny- to large-scale data without pre-training. Comprehensive experiments demonstrate MedFormer's potential as a versatile segmentation backbone, outperforming CNNs and vision Transformers on seven public datasets covering multiple modalities (e.g., CT and MRI) and various medical targets (e.g., healthy organs, diseased tissues, and tumors). We provide public access to our models and evaluation pipeline, offering solid baselines and unbiased comparisons to advance a wide range of downstream clinical applications.
DOI: 10.48550/arxiv.2203.00131
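
The abstract highlights "hierarchical modeling with linear-complexity attention" as a key element. As a rough illustration of how attention over a 3D volume can be made linear in the number of voxels, the sketch below pools keys and values down to a fixed-size token set before the softmax, so the score matrix is N x M (M fixed) rather than N x N. This is a generic pattern, not the MedFormer formulation: the module name `PooledAttention3D`, the average-pooling choice, and all hyperparameters are illustrative assumptions; see the paper at the DOI above for the actual design.

```python
# Hedged sketch of attention with cost linear in the number of voxels N:
# queries attend to a pooled set of M << N key/value tokens. Illustrative
# only; NOT the exact MedFormer attention mechanism.
import torch
import torch.nn as nn


class PooledAttention3D(nn.Module):
    """Attention over a (B, C, D, H, W) feature volume. Keys/values are
    pooled to pooled_size**3 tokens, so the score matrix is N x M with M
    fixed, giving O(N) cost instead of the O(N^2) of full attention."""

    def __init__(self, dim: int, num_heads: int = 4, pooled_size: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        # Pool the volume to a fixed pooled_size^3 grid for keys/values.
        self.pool = nn.AdaptiveAvgPool3d(pooled_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, D, H, W = x.shape
        n = D * H * W
        q = self.q(x.flatten(2).transpose(1, 2))           # (B, N, C)
        pooled = self.pool(x).flatten(2).transpose(1, 2)   # (B, M, C)
        k, v = self.kv(pooled).chunk(2, dim=-1)            # 2 x (B, M, C)

        def split(t: torch.Tensor) -> torch.Tensor:
            # (B, T, C) -> (B, heads, T, head_dim)
            return t.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) * self.scale      # (B, h, N, M)
        out = attn.softmax(dim=-1) @ v                     # (B, h, N, head_dim)
        out = out.transpose(1, 2).reshape(B, n, C)
        out = self.proj(out)
        return out.transpose(1, 2).view(B, C, D, H, W)


if __name__ == "__main__":
    x = torch.randn(1, 32, 16, 32, 32)   # toy feature volume (e.g., from CT/MRI)
    attn = PooledAttention3D(dim=32)
    print(attn(x).shape)                  # torch.Size([1, 32, 16, 32, 32])
```

Because M stays constant while N grows with volume resolution, both memory and compute for the score matrix scale as O(N), which is what makes such schemes attractive for 3D medical volumes where N can reach millions of voxels.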