Attention decoupled contrastive learning for semi-supervised segmentation method based on data augmentation

Detailed description

Bibliographic details
Published in: Physics in Medicine & Biology, 2024-06, Vol. 69 (12)
Authors: Pan, Pan; Chen, Houjin; Li, Yanfeng; Peng, Wanru; Cheng, Lin
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Deep learning algorithms have demonstrated impressive performance by leveraging large labeled datasets. However, acquiring pixel-level annotations for medical image analysis, especially in segmentation tasks, is both costly and time-consuming, posing challenges for supervised learning techniques. Existing semi-supervised methods tend to underutilize the representations of unlabeled data and handle labeled and unlabeled data separately, neglecting their interdependencies. To address this issue, we introduce the Data-Augmented Attention-Decoupled Contrastive model (DADC). This model incorporates an attention decoupling module and utilizes contrastive learning to effectively distinguish foreground and background, significantly improving segmentation accuracy. Our approach integrates an augmentation technique that merges information from both labeled and unlabeled data, notably boosting network performance, especially in scenarios with limited labeled data. We conducted comprehensive experiments on the automated breast ultrasound (ABUS) dataset, and the results demonstrate that DADC outperforms existing segmentation methods in terms of segmentation performance.
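
The abstract names two ideas: an augmentation that mixes information from labeled and unlabeled images, and contrastive learning that separates foreground from background pixels. The sketch below illustrates both in a generic PyTorch style; the function names (mix_labeled_unlabeled, fg_bg_contrastive_loss), the CutMix-style patch paste, and the prototype-based contrastive loss are illustrative assumptions and are not taken from the paper itself.

```python
# Hedged sketch only: a generic mix augmentation and a foreground/background
# prototype contrastive loss, not the exact DADC formulation.
import torch
import torch.nn.functional as F

def mix_labeled_unlabeled(x_l, y_l, x_u, y_u_pseudo, lam=0.5):
    """Paste a rectangular patch from an unlabeled image (with its pseudo-label)
    into a labeled image, so one training sample carries information from both pools."""
    h, w = x_l.shape[-2:]
    cut = (1.0 - lam) ** 0.5
    ch, cw = int(h * cut), int(w * cut)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - ch // 2, 0), min(cy + ch // 2, h)
    x1, x2 = max(cx - cw // 2, 0), min(cx + cw // 2, w)
    x_mix, y_mix = x_l.clone(), y_l.clone()
    x_mix[..., y1:y2, x1:x2] = x_u[..., y1:y2, x1:x2]
    y_mix[..., y1:y2, x1:x2] = y_u_pseudo[..., y1:y2, x1:x2]
    return x_mix, y_mix

def fg_bg_contrastive_loss(feat, mask, tau=0.1):
    """Pull pixel embeddings toward their own class prototype (foreground or
    background) and away from the other one via a 2-way softmax.
    feat: (B, C, H, W) pixel embeddings; mask: (B, 1, H, W) binary foreground mask."""
    b, c, h, w = feat.shape
    mask = mask.float()
    # Class prototypes = masked averages of the pixel embeddings.
    fg_proto = (feat * mask).sum(dim=(0, 2, 3)) / mask.sum().clamp(min=1.0)
    bg_proto = (feat * (1 - mask)).sum(dim=(0, 2, 3)) / (1 - mask).sum().clamp(min=1.0)
    protos = F.normalize(torch.stack([bg_proto, fg_proto]), dim=1)       # (2, C)
    pix = F.normalize(feat.permute(0, 2, 3, 1).reshape(-1, c), dim=1)    # (B*H*W, C)
    logits = pix @ protos.t() / tau                                      # (B*H*W, 2)
    target = mask.reshape(-1).long()                                     # 0 = bg, 1 = fg
    return F.cross_entropy(logits, target)
```

In a semi-supervised loop of this kind, y_u_pseudo would typically be the network's own prediction on the unlabeled image, and both terms would be added to the usual supervised segmentation loss; the actual module design and loss weighting used by DADC are described in the article itself.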
ISSN: 0031-9155, 1361-6560
DOI: 10.1088/1361-6560/ad4d4f