Reducing annotation burden in MR: A novel MR‐contrast guided contrastive learning approach for image segmentation

Bibliographic Details
Published in: Medical Physics (Lancaster) 2024-04, Vol. 51 (4), p. 2707-2720
Main authors: Umapathy, Lavanya; Brown, Taylor; Mushtaq, Raza; Greenhill, Mark; Lu, J'rick; Martin, Diego; Altbach, Maria; Bilgin, Ali
Format: Article
Language: English
Online access: Full text
Description
Abstract:
Background: Contrastive learning, a successful form of representation learning, has shown promising results in pretraining deep learning (DL) models for downstream tasks. When working with limited annotated data, as in medical image segmentation tasks, learning domain-specific local representations can further improve the performance of DL models.

Purpose: In this work, we extend the contrastive learning framework to utilize domain-specific contrast information from unlabeled Magnetic Resonance (MR) images to improve the performance of downstream MR image segmentation tasks in the presence of limited labeled data.

Methods: The contrast in MR images is controlled by underlying tissue properties (e.g., T1 or T2) and image acquisition parameters. We hypothesize that learning to discriminate local representations based on underlying tissue properties should improve subsequent segmentation tasks on MR images. We propose a novel constrained contrastive learning (CCL) strategy that uses tissue-specific information via a constraint map to define positive and negative local neighborhoods for contrastive learning, embedding this information in the representational space during pretraining. For a given MR contrast image, the proposed strategy uses local signal characteristics (the constraint map) across a set of related multi-contrast MR images as a surrogate for underlying tissue information. We demonstrate the utility of the approach for two downstream applications: (1) multi-organ segmentation in T2-weighted images, where a DL model learns T2 information with constraint maps from a set of 2D multi-echo T2-weighted images (n = 101), and (2) tumor segmentation in multi-parametric images from the public brain tumor segmentation (BraTS) dataset (n = 80), where DL models learn T1 and T2 information from multi-parametric BraTS images. Performance is evaluated on downstream multi-label segmentation tasks with limited data in (1) T2-weighted images of the abdomen from an in-house Radial-T2 dataset (Train/Test = 30/20), (2) the public Cartesian-T2 dataset (Train/Test = 6/12), and (3) multi-parametric MR images from the public BraTS dataset (Train/Test = 40/50). The performance of the proposed CCL strategy is compared to state-of-the-art self-supervised contrastive learning techniques. In each task, a model is also trained using all available labeled data to establish supervised baseline performance.

Results: The proposed CCL strategy consistently yielded improved Dice scores, Pre…
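As a rough illustration of the constrained contrastive idea summarized above, the sketch below implements a pixel-wise InfoNCE-style loss in which a constraint map (a surrogate tissue-property map, e.g., a T2 map) decides which sampled pixels count as positives for a given anchor. The function name, the similarity threshold sim_thresh, the temperature, and the random-sampling scheme are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F


def constrained_contrastive_loss(features, constraint_map, n_anchors=256,
                                 sim_thresh=0.1, temperature=0.1):
    # features:       (B, C, H, W) per-pixel embeddings from an encoder
    # constraint_map: (B, H, W) surrogate tissue-property map (e.g., a T2 map),
    #                 here assumed to be normalized to [0, 1]
    # Sampled pixels whose constraint values differ from the anchor's by less
    # than sim_thresh are treated as positives; all others act as negatives.
    B, C, H, W = features.shape
    feats = F.normalize(features, dim=1).flatten(2)   # (B, C, H*W), unit norm
    cmap = constraint_map.flatten(1)                  # (B, H*W)

    losses = []
    for b in range(B):
        idx = torch.randperm(H * W, device=features.device)[:n_anchors]
        f = feats[b, :, idx].t()                      # (n_anchors, C)
        c = cmap[b, idx]                              # (n_anchors,)

        sim = f @ f.t() / temperature                 # pairwise feature similarity
        pos_mask = (c[:, None] - c[None, :]).abs() < sim_thresh
        pos_mask.fill_diagonal_(False)                # exclude self-pairs

        # InfoNCE-style objective: pull together pixels with similar
        # constraint (tissue) values, push apart the rest.
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        n_pos = pos_mask.sum(dim=1).clamp(min=1)
        losses.append((-(log_prob * pos_mask).sum(dim=1) / n_pos).mean())

    return torch.stack(losses).mean()


if __name__ == "__main__":
    # Stand-in tensors: a fake encoder output and a synthetic constraint map.
    feats = torch.randn(2, 32, 64, 64, requires_grad=True)
    t2_map = torch.rand(2, 64, 64)
    loss = constrained_contrastive_loss(feats, t2_map)
    loss.backward()
    print(loss.item())

In the paper's framing, a loss of this kind would be used to pretrain a segmentation encoder on unlabeled multi-contrast MR images, embedding tissue-contrast information in the learned representations before fine-tuning on the limited labeled segmentation data.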
ISSN: 0094-2405; 2473-4209
DOI: 10.1002/mp.16820