Diagnosis system for cancer disease using a single setting approach


Detailed description

Bibliographic details
Published in: Multimedia Tools and Applications 2023-12, Vol. 82 (30), p. 46241-46267
Main authors: Bhuyan, Hemanta Kumar; Vijayaraj, A.; Ravi, Vinayakumar
Format: Article
Language: English
Online access: Full text
Description
Abstract: This paper addresses a diagnosis system for cancer disease using a single-setting framework. Most radiologists and imaging specialists identify the disease by eye, and when conventional systems are used to assess a patient's condition, they rarely detect the disease all at once; patients face increasing difficulties as the disease progresses. This paper therefore assesses the patient's condition from the disease image and develops a single-setting framework built on a convolutional neural network (CNN) architecture using deep learning approaches. The framework combines several deep learning strategies to determine the patient's illness from the affected image: mass detection with the You-Only-Look-Once (YOLO) approach, segmentation with full-resolution convolutional networks (FrCN), and, finally, classification with a CNN model. The model is implemented for breast cancer. Different classifiers and cross-validation tests are used to evaluate the validation metrics, and the proposed model is compared with existing models to improve the diagnosis system. For example, Inception V3 achieves accuracy and AUC of 86.77 and 85.89 on the MIAS database, whereas the proposed model reaches 99.54 and 98.85 on the same metrics. The findings show that the proposed diagnostic model outperforms conventional detection, segmentation, and classification methods. The diagnosis process thus works much better with deep learning and the suggested approaches, which help and facilitate the diagnosis of each affected region; the suggested diagnostic method could support radiologists at each stage of image processing of the infected region.
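
To make the detection, segmentation, and classification flow described in the abstract concrete, the following Python sketch wires the three stages together in a single pass over a mammogram. It is only an illustration under assumed interfaces: the stage functions detect_masses, segment_region, and classify_lesion are hypothetical placeholders standing in for the trained YOLO detector, the FrCN segmenter, and the CNN classifier, and do not reproduce the authors' implementation.

    # Hypothetical sketch of the three-stage pipeline: (1) YOLO-style mass
    # detection, (2) FrCN-style segmentation of each detected region,
    # (3) CNN-style classification of the segmented lesion.
    import numpy as np

    def detect_masses(mammogram: np.ndarray) -> list[tuple[int, int, int, int]]:
        """Placeholder for a YOLO-style detector: returns (x, y, w, h) boxes."""
        h, w = mammogram.shape
        return [(w // 4, h // 4, w // 2, h // 2)]  # one dummy region of interest

    def segment_region(patch: np.ndarray) -> np.ndarray:
        """Placeholder for an FrCN-style segmenter: returns a binary lesion mask."""
        return (patch > patch.mean()).astype(np.uint8)

    def classify_lesion(patch: np.ndarray, mask: np.ndarray) -> str:
        """Placeholder for the CNN classifier: benign vs. malignant."""
        lesion_pixels = patch[mask == 1]
        return "malignant" if lesion_pixels.mean() > 0.5 else "benign"

    def diagnose(mammogram: np.ndarray) -> list[dict]:
        """Run detection -> segmentation -> classification in one setting."""
        results = []
        for (x, y, w, h) in detect_masses(mammogram):
            patch = mammogram[y:y + h, x:x + w]
            mask = segment_region(patch)
            label = classify_lesion(patch, mask)
            results.append({"box": (x, y, w, h), "label": label})
        return results

    if __name__ == "__main__":
        image = np.random.rand(256, 256)  # stand-in for a preprocessed mammogram
        print(diagnose(image))

The point of the sketch is the control flow: each detected box is cropped, segmented, and classified in turn, so a single run over the image yields a per-lesion diagnosis, which is what the paper's single-setting framework refers to.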
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-023-15478-8