An Automated Deep Learning Model for the Cerebellum Segmentation from Fetal Brain Images

Bibliographic Details
Published in: BioMed Research International, 2022-06, Vol. 2022, p. 8342767-13
Authors: Sreelakshmy, R., Titus, Anita, Sasirekha, N., Logashanmugam, E., Begam, R. Benazir, Ramkumar, G., Raju, Raja
Format: Article
Language: English
Abstract: Cerebellum measurements taken from routinely acquired ultrasound (US) images are frequently used to determine gestational age and to identify anatomical abnormalities of the developing central nervous system. Standardized cerebellar assessments from large-scale clinical datasets are required to investigate correlations between the growing cerebellum and postnatal neurodevelopmental outcomes. Such studies could uncover structural abnormalities that serve as indicators for forecasting neurodevelopmental and growth outcomes. To achieve this, higher-throughput, precise, and unbiased measurements must replace the existing manual, semiautomatic, and advanced algorithms, which tend to be time-consuming and inaccurate. In this article, we present an innovative deep learning (DL) technique for automatic fetal cerebellum segmentation from 2-dimensional (2D) US brain images. We propose ReU-Net, a semantic segmentation network tailored to the anatomy of the fetal cerebellum. We use U-Net as the foundation model, incorporating residual blocks and a Wiener filter over the last two layers to separate the cerebellum from the noisy US data. A total of 590 images were used for training and 150 for testing, and we employed 5-fold cross-validation. Our ReU-Net scored 91%, 92%, 25.42, 98%, 92%, and 94% for Dice Score Coefficient (DSC), F1-score, Hausdorff distance (HD), accuracy, recall, and precision, respectively. The proposed method outperforms other U-Net-based techniques by a statistically significant margin (p < 0.001). The presented approach can be used to enable high-throughput analysis of fetal US images in medical studies, as well as broader-scale biometric evaluation of fetal US images.
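
The abstract names two generic ingredients of ReU-Net that can be illustrated briefly: residual convolution blocks added to a U-Net backbone, and the Dice score used for evaluation. The exact architecture, the Wiener-filter placement, and the training setup are described only in the full paper, so the following is a minimal PyTorch-style sketch under those assumptions; the class and function names (ResidualBlock, dice_score) are illustrative and not the authors' code.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Generic residual block of the kind used in residual U-Net variants:
    # two 3x3 conv + BN + ReLU stages with an identity (or 1x1-projected) skip path.
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # Project the skip connection when the channel count changes.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))


def dice_score(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-7) -> float:
    # Dice similarity coefficient between two binary segmentation masks.
    pred = pred_mask.bool()
    target = true_mask.bool()
    intersection = (pred & target).sum().item()
    return (2.0 * intersection + eps) / (pred.sum().item() + target.sum().item() + eps)


if __name__ == "__main__":
    # Toy check on a random single-channel tensor; real inputs would be preprocessed US frames.
    x = torch.randn(1, 1, 128, 128)
    block = ResidualBlock(1, 16)
    print(block(x).shape)               # torch.Size([1, 16, 128, 128])
    m = torch.zeros(128, 128, dtype=torch.bool)
    m[40:80, 40:80] = True
    print(round(dice_score(m, m), 3))   # 1.0 for identical masks
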
ISSN: 2314-6133; 2314-6141
DOI: 10.1155/2022/8342767