Truly Generalizable Radiograph Segmentation With Conditional Domain Adaptation
Published in: IEEE Access, 2020, Vol. 8, pp. 84037-84062
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Digitization techniques for biomedical images yield disparate visual patterns in radiological exams. These pattern differences, which can be viewed as a domain-shift problem, may hamper the use of data-driven approaches, such as Deep Neural Networks, for inference over these images. Another noticeable difficulty in this field is the lack of labeled data, even though in many cases there is an abundance of unlabeled data available. Therefore, an important step in improving the generalization capabilities of these methods and mitigating domain-shift effects is to perform unsupervised or semi-supervised adaptation between different domains of biomedical images. In this work, we propose a novel approach for the segmentation of biomedical images based on Generative Adversarial Networks. The proposed method, named Conditional Domain Adaptation Generative Adversarial Network (CoDAGAN), merges unsupervised networks with supervised deep semantic segmentation architectures to create a semi-supervised method capable of learning from both unlabeled and labeled data, whenever labeling is available. We conducted experiments comparing our method with traditional and state-of-the-art baselines across several domains, datasets, and segmentation tasks. The proposed method yielded consistently better results than the baselines in scenarios with scarce labeled data, achieving Jaccard values greater than 0.9 and good segmentation quality in most tasks. Unsupervised Domain Adaptation results were observed to be close to those of the Fully Supervised Domain Adaptation used in the traditional procedure of fine-tuning pretrained networks.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2991688
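The abstract reports segmentation quality as Jaccard values (greater than 0.9 in scarce-label scenarios). As a point of reference, below is a minimal NumPy sketch of the Jaccard (intersection-over-union) score for binary masks; it is a generic illustration of the metric, not code from the paper, and the function name and example masks are hypothetical.

```python
# Generic NumPy illustration of the Jaccard (IoU) score used as the
# evaluation metric in the abstract; not code from the CoDAGAN paper.
import numpy as np

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Convention: two empty masks count as a perfect match.
    return 1.0 if union == 0 else float(intersection) / float(union)

# Hypothetical example: a predicted mask vs. its ground-truth annotation.
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 1, 0],
                   [0, 0, 0]])
print(jaccard_index(pred, target))  # 2 overlapping pixels / 3 in the union ≈ 0.667
```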