Ω-Net (Omega-Net): Fully automatic, multi-view cardiac MR detection, orientation, and segmentation with deep neural networks


Bibliographic Details
Published in: Medical Image Analysis, 2018-08, Vol. 48, pp. 95-106
Main Authors: Vigneault, Davis M.; Xie, Weidi; Ho, Carolyn Y.; Bluemke, David A.; Noble, J. Alison
Format: Article
Language: English
Online Access: Full text
Description
Highlights:
• The authors propose Omega-Net: a novel convolutional neural network architecture for the detection, orientation, and segmentation of cardiac MR images.
• Three modules comprise the network: a coarse-grained segmentation module, an attention module, and a fine-grained segmentation module.
• The network is trained end-to-end from scratch using three-fold cross-validation in 63 subjects (42 with hypertrophic cardiomyopathy, 21 healthy).
• Performance of the Omega-Net is substantively improved compared with U-Net alone.
• In addition, to be comparable with other works, Omega-Net was retrained from scratch using five-fold cross-validation on the publicly available 2017 MICCAI Automated Cardiac Diagnosis Challenge (ACDC) dataset, achieving state-of-the-art performance in two of three segmentation classes.

Abstract:
Pixelwise segmentation of the left ventricular (LV) myocardium and the four cardiac chambers in 2-D steady state free precession (SSFP) cine sequences is an essential preprocessing step for a wide range of analyses. Variability in contrast, appearance, orientation, and placement of the heart between patients, clinical views, scanners, and protocols makes fully automatic semantic segmentation a notoriously difficult problem. Here, we present Ω-Net (Omega-Net): a novel convolutional neural network (CNN) architecture for simultaneous localization, transformation into a canonical orientation, and semantic segmentation. First, an initial segmentation is performed on the input image; second, the features learned during this initial segmentation are used to predict the parameters needed to transform the input image into a canonical orientation; and third, a final segmentation is performed on the transformed image. In this work, Ω-Nets of varying depths were trained to detect five foreground classes in any of three clinical views (short axis, SA; four-chamber, 4C; two-chamber, 2C), without prior knowledge of the view being segmented. This constitutes a substantially more challenging problem compared with prior work. The architecture was trained using three-fold cross-validation on a cohort of patients with hypertrophic cardiomyopathy (HCM, N=42) and healthy control subjects (N=21). Network performance, as measured by weighted foreground intersection-over-union (IoU), was substantially improved for the best-performing Ω-Net compared with U-Net segmentation without localization or orientation (0.858 vs 0.834). In addition, to be comparable with other works, Ω-Net was retrained from scratch using five-fold cross-validation on the publicly available 2017 MICCAI Automated Cardiac Diagnosis Challenge (ACDC) dataset, achieving state-of-the-art performance in two of three segmentation classes.
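The abstract describes a three-stage pipeline: an initial segmentation, a transform into a canonical orientation predicted from the initial-stage features, and a final segmentation of the transformed image. The following is a minimal sketch of that control flow, assuming a PyTorch-style implementation; the module internals, the affine parameterization, and all names (TinyUNet, OmegaNetSketch, param_head) are illustrative assumptions, not the authors' code.

    # Sketch of the Omega-Net coarse -> transform -> fine pipeline (assumed PyTorch).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyUNet(nn.Module):
        """Stand-in for a U-Net; returns per-pixel class scores and features."""
        def __init__(self, in_ch, n_classes, width=16):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(width, n_classes, 1)
        def forward(self, x):
            feats = self.body(x)
            return self.head(feats), feats

    class OmegaNetSketch(nn.Module):
        def __init__(self, n_classes=6, width=16):   # 5 foreground classes + background
            super().__init__()
            self.coarse = TinyUNet(1, n_classes, width)
            # Regress transform parameters from coarse-stage features (assumed head).
            self.param_head = nn.Linear(width, 4)     # rotation pair + translation (tx, ty)
            self.fine = TinyUNet(1, n_classes, width)

        def forward(self, x):
            coarse_seg, feats = self.coarse(x)
            p = self.param_head(feats.mean(dim=(2, 3)))    # global average pool
            # Assemble a 2x3 affine matrix per image from the predicted parameters.
            theta = torch.stack([
                torch.stack([p[:, 0], -p[:, 1], p[:, 2]], dim=1),
                torch.stack([p[:, 1],  p[:, 0], p[:, 3]], dim=1)], dim=1)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            x_canon = F.grid_sample(x, grid, align_corners=False)  # canonical orientation
            fine_seg, _ = self.fine(x_canon)
            return coarse_seg, theta, fine_seg

    # Usage: both segmentations and the predicted transform are returned.
    seg_coarse, theta, seg_fine = OmegaNetSketch()(torch.randn(2, 1, 128, 128))

Because the warp in this sketch is differentiable (affine_grid/grid_sample), the orientation module can be trained jointly with both segmentation stages, which is consistent with the end-to-end training described in the abstract.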
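The comparison quoted above (0.858 vs 0.834) uses a weighted foreground intersection-over-union. The exact weighting scheme is not specified in this record; the sketch below assumes weighting each foreground class by its ground-truth pixel count, purely for illustration.

    # Hedged sketch of a weighted foreground IoU metric (assumed weighting).
    import numpy as np

    def weighted_foreground_iou(pred, truth, n_classes):
        """pred, truth: integer label maps of equal shape; class 0 is background."""
        ious, weights = [], []
        for c in range(1, n_classes):
            p, t = (pred == c), (truth == c)
            union = np.logical_or(p, t).sum()
            if union == 0:
                continue                        # class absent in both maps
            ious.append(np.logical_and(p, t).sum() / union)
            weights.append(t.sum())             # assumed: weight by ground-truth pixels
        return float(np.average(ious, weights=weights)) if ious else float("nan")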
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2018.05.008