P–166 Machine learning for automated cell segmentation in embryos


Full description

Bibliographic details
Published in: Human reproduction (Oxford) 2021-08, Vol. 36 (Supplement_1)
Authors: He, C, Hariharan, R, Chambost, J, Jacques, C, Azambuja, R, Badalotti, M, Macedo, F, Ebner, T, Zaninovic, N, Malmsten, J, Van de Velde, H, Wouters, K, Fréour, T, Wiemer, K, Hickman, C
Format: Article
Language: English
Online access: Full text
Abstract

Study question: Is it possible to automate the process of detecting individual blastomeres within a 4-cell embryo?

Summary answer: Deep learning models are capable of identifying individual cells in single focal plane images of 4-cell embryos.

What is known already: As individual blastomeres within a 4-cell embryo maintain totipotency, their intercellular junctions are critical in maintaining and directing communication. These junctions are determined by the zygote's cleavage patterns and can affect the overall embryo 'shape', which can be described as either 'tetrahedral' or 'planar'. Planar embryos carry significantly worse outcomes in both the short and long term, such as more compromised blastulation, clinical pregnancy and live birth rates. Therefore, more accurate identification of cell borders at the 4-cell stage may contribute to improved classification of cell shape and embryo visualisation.

Study design, size, duration: This was a retrospective cohort analysis of 222 single focal plane images from 3 clinics. Each image captured an embryo at the 4-cell stage and was taken using the Embryoscope™ time-lapse incubator at the central focal plane. Images from two of the clinics were split into training (n = 161) and validation (n = 17) sets. Images from the third clinic formed a blind testing set (n = 44).

Participants/materials, setting, methods: Ground truth masks were manually created by two human operators using the VGG Image Annotator software. A Mask R-CNN neural network model with a pre-trained ResNet-50 backbone was trained to segment individual blastomeres from the training images. Data augmentation (flips, rotations, Gaussian noise, cropping, brightness changes and optical distortion) was used during the training process. The model's performance was evaluated using the IoU (intersection over union) metric, a measure of overlap between model-predicted and human-annotated masks.
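The joint augmentation of images and masks described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the study's pipeline: the function name `augment` and the specific probabilities and noise levels are assumptions, and the study additionally used cropping and optical distortion. The key point it shows is that geometric transforms must be applied identically to image and mask, while photometric changes touch the image only:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, mask):
    """Apply random flips/rotations to both image and mask,
    then noise and a brightness shift to the image only.
    (Illustrative sketch; parameters are assumed, not from the study.)"""
    # random horizontal / vertical flips: geometric, so applied to both
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:
        image, mask = image[::-1, :], mask[::-1, :]
    # random multiple-of-90-degree rotation, also applied to both
    k = int(rng.integers(0, 4))
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    # Gaussian noise and a global brightness shift: image only
    image = image + rng.normal(0.0, 0.02, image.shape)
    image = np.clip(image + rng.uniform(-0.1, 0.1), 0.0, 1.0)
    return image, mask

img = rng.random((8, 8))                          # toy grayscale image
msk = (rng.random((8, 8)) > 0.5).astype(np.uint8) # toy binary mask
aug_img, aug_msk = augment(img, msk)
print(aug_img.shape, set(np.unique(aug_msk)) <= {0, 1})  # → (8, 8) True
```

Because the mask is only ever flipped or rotated, its labels stay binary and stay aligned with the transformed image, which is what instance segmentation training requires.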
Main results and the role of chance: The model was evaluated on a blind test set of 44 images. It achieved a mean IoU of 0.92 for individual cells (standard deviation (SD) = 0.05), with precision and sensitivity of 0.95 and 0.97 respectively. The mean IoU for the entire embryo (all 4 blastomeres combined) was 0.92 (SD = 0.02). Furthermore, the model was able to count the number of cells in the images with 70% accuracy, deviating by no more than 1 cell in each erroneous case. The nature of these errors can be broken down into the detection of fragmentation as a cell (2 cases); the detect…
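The IoU metric used for evaluation above is straightforward to compute from binary masks. The sketch below (the helper name `mask_iou` is illustrative, not from the paper) shows the overlap measure on toy 4×4 masks:

```python
import numpy as np

def mask_iou(pred, truth):
    """Intersection-over-Union between two binary segmentation masks:
    |pred AND truth| / |pred OR truth|. Returns 0.0 for two empty masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0

# toy example: predicted mask covers a 2x2 block, ground truth an L-shape
pred  = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(float(mask_iou(pred, truth)), 2))  # → 0.75 (3 shared pixels / 4 total)
```

A per-cell IoU averages this quantity over the individual blastomere masks, while the whole-embryo IoU applies it to the union of all four masks at once.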
ISSN: 0268-1161, 1460-2350
DOI: 10.1093/humrep/deab130.165