Region Attention Transformer for Medical Image Restoration
Abstract: Transformer-based methods have demonstrated impressive results in medical image restoration, attributed to the multi-head self-attention (MSA) mechanism in the spatial dimension. However, the majority of existing Transformers conduct attention within fixed and coarsely partitioned regions (e.g., the entire image or fixed patches), resulting in interference from irrelevant regions and fragmentation of continuous image content. To overcome these challenges, we introduce a novel Region Attention Transformer (RAT) that utilizes a region-based multi-head self-attention mechanism (R-MSA). The R-MSA dynamically partitions the input image into non-overlapping semantic regions using the robust Segment Anything Model (SAM) and then performs self-attention within these regions. This region partitioning is more flexible and interpretable, ensuring that only pixels from similar semantic regions complement each other, thereby eliminating interference from irrelevant regions. Moreover, we introduce a focal region loss to guide our model to adaptively focus on recovering high-difficulty regions. Extensive experiments demonstrate the effectiveness of RAT in various medical image restoration tasks, including PET image synthesis, CT image denoising, and pathological image super-resolution. Code is available at https://github.com/Yaziwel/Region-Attention-Transformer-for-Medical-Image-Restoration.git.
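To make the R-MSA idea from the abstract concrete, the sketch below restricts standard multi-head self-attention to pixels that share a region id. It is a minimal illustration, not the released implementation: the class name `RegionMSA`, the `(B, C, H, W)` feature layout, and the per-region Python loop are assumptions, and the integer `region_labels` map stands in for the segmentation that the full method obtains from SAM.

```python
# Minimal sketch of region-based multi-head self-attention (R-MSA).
# Assumptions (not the paper's code): features are (B, C, H, W) tensors and
# `region_labels` is an integer map (B, H, W) whose ids would come from SAM.
import torch
import torch.nn as nn


class RegionMSA(nn.Module):
    """Self-attention restricted to pixels sharing a region id (illustrative sketch)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, region_labels: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)           # (B, H*W, C)
        labels = region_labels.flatten(1)               # (B, H*W)
        out = tokens.clone()
        for i in range(b):
            for rid in labels[i].unique():
                idx = (labels[i] == rid).nonzero(as_tuple=True)[0]
                seq = tokens[i, idx].unsqueeze(0)       # (1, N_r, C): pixels of one region
                attn_out, _ = self.attn(seq, seq, seq)  # attention only within this region
                out[i, idx] = attn_out.squeeze(0)
        return out.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    feats = torch.randn(1, 32, 16, 16)
    # Toy region map standing in for a SAM segmentation: two vertical halves.
    regions = torch.zeros(1, 16, 16, dtype=torch.long)
    regions[:, :, 8:] = 1
    print(RegionMSA(dim=32)(feats, regions).shape)      # torch.Size([1, 32, 16, 16])
```

Because attention is computed per region, pixels from different semantic regions never attend to each other, which is the interference-suppression property the abstract describes.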
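The abstract states only that the focal region loss steers training toward high-difficulty regions; the exact formulation is not given there. The following is therefore a hedged, hypothetical weighting scheme: per-region L1 error re-weighted by its normalized magnitude with an assumed exponent `gamma`, purely to illustrate the kind of region-adaptive objective being described.

```python
# Hypothetical sketch of a focal region loss: regions that are currently harder
# to restore (larger per-region error) receive larger weights. The weighting
# rule and `gamma` are assumptions, not the paper's exact formulation.
import torch


def focal_region_loss(pred: torch.Tensor, target: torch.Tensor,
                      region_labels: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Average per-region L1 error, weighted by each region's relative difficulty."""
    b = pred.shape[0]
    labels = region_labels.flatten(1)                     # (B, H*W)
    err = (pred - target).abs().mean(1).flatten(1)        # per-pixel error, (B, H*W)
    total = pred.new_tensor(0.0)
    for i in range(b):
        region_ids = labels[i].unique()
        region_err = torch.stack([err[i][labels[i] == rid].mean() for rid in region_ids])
        # Harder regions (larger error) get proportionally larger, detached weights.
        weights = (region_err / region_err.mean().clamp_min(1e-8)).detach() ** gamma
        total = total + (weights * region_err).mean()
    return total / b


if __name__ == "__main__":
    pred = torch.randn(2, 1, 16, 16, requires_grad=True)
    target = torch.randn(2, 1, 16, 16)
    regions = (torch.arange(16 * 16).reshape(1, 16, 16) // 64).expand(2, -1, -1)
    focal_region_loss(pred, target, regions).backward()
```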
DOI: 10.48550/arxiv.2407.09268