Segmentation of gastric cancer from microscopic biopsy images using deep learning approach

Detailed Description

Bibliographic Details
Published in: Biomedical Signal Processing and Control, 2023-09, Vol. 86, p. 105250, Article 105250
Main authors: Rasal, Tushar; Veerakumar, T.; Subudhi, Badri Narayan; Esakkirajan, S.
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: In computer vision applications for microscopy image analysis, object segmentation, classification, and structure localization are essential processes for many biological researchers across the globe. The manual interpretation of microscopic biopsies for cancer diagnosis is subjective, time-consuming, and dependent on the pathologist's interpretation and degree of skill, so an automated cancer diagnosis approach is required to address these constraints. In this work, a deep learning-based architecture, namely the enhanced empirical mode decomposition (EMD) convolutional neural network (EECNN), is proposed for object segmentation in microscopy images. The multiple intrinsic mode functions (IMFs) obtained from the decomposition provide detailed frequency components of an image for feature extraction, which significantly improves performance compared with earlier feature-extraction approaches. The presented network is used to segment nuclei and cells in microscopy images. It uses multi-resolution deconvolution filters to train at several scales of the input image and links the intervening layers for improved localization and context. Additional convolution layers that skip the max-pooling function allow the network to train for varying input intensities and object dimensions while remaining robust to noisy data. Experimental results on widely accessible datasets demonstrate that the proposed method provides superior results, with an accuracy of 0.9445, specificity of 0.9256, sensitivity of 0.9245, MCC of 0.9654, and BCR of 0.9250.
Highlights:
•A deep learning-based architecture for object segmentation with modified CNNs.
•A unified architecture for segmenting nuclei, cells, and glands in histology images.
•The proposed technique is verified on the GasHisSDB dataset with several imaging modalities.
•An in-depth feature extraction step significantly improves segmentation performance.
•Detailed results demonstrate the method's robustness to high levels of noise.
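Illustrative sketch (not from the article): to make the pipeline described in the abstract more concrete, the short Python snippet below decomposes a grayscale patch into a few coarse-to-fine bands as a crude stand-in for the EMD intrinsic mode functions (the paper's actual decomposition is not reproduced here) and feeds them as input channels to a small encoder-decoder CNN with a deconvolution upsampling path and a skip connection. All names (pseudo_imfs, TinySegNet), layer sizes, and parameter choices are hypothetical assumptions for illustration only.

    # Minimal sketch, assuming an IMF-like multi-band input to a small
    # encoder-decoder CNN; NOT the authors' implementation.
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.ndimage import gaussian_filter

    def pseudo_imfs(image: np.ndarray, n_bands: int = 3) -> np.ndarray:
        """Split an image into coarse-to-fine bands (crude stand-in for EMD IMFs)."""
        bands, residual = [], image.astype(np.float32)
        for sigma in (1, 2, 4)[:n_bands]:
            smooth = gaussian_filter(residual, sigma)
            bands.append(residual - smooth)   # detail band at this scale
            residual = smooth
        bands.append(residual)                # final low-frequency residue
        return np.stack(bands)                # shape: (n_bands + 1, H, W)

    class TinySegNet(nn.Module):
        """Encoder-decoder with one skip connection; IMF bands enter as channels."""
        def __init__(self, in_ch: int):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
            self.enc2 = nn.Sequential(nn.MaxPool2d(2),
                                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # deconvolution upsampling
            self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 1, 1))       # 1-channel mask logits

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(e1)
            d = torch.cat([self.up(e2), e1], dim=1)  # skip link for localization
            return self.dec(d)

    if __name__ == "__main__":
        img = np.random.rand(128, 128)                   # toy grayscale patch
        imfs = torch.from_numpy(pseudo_imfs(img))[None]  # (1, C, H, W)
        mask_logits = TinySegNet(in_ch=imfs.shape[1])(imfs)
        print(mask_logits.shape)                         # torch.Size([1, 1, 128, 128])

Concatenating the upsampled decoder features with the earlier encoder output loosely mirrors the "linking of intervening layers" mentioned in the abstract, which helps preserve spatial detail at object boundaries; the band-wise input channels stand in for the frequency components that the IMFs are said to contribute.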
ISSN: 1746-8094; 1746-8108
DOI: 10.1016/j.bspc.2023.105250