Mask region-based convolutional neural network and VGG-16 inspired brain tumor segmentation

Bibliographic Details
Published in: Scientific Reports 2024-07, Vol. 14 (1), p. 17615-22, Article 17615
Authors: Basha, Niha Kamal; Ananth, Christo; Muthukumaran, K.; Sudhamsu, Gadug; Mittal, Vikas; Gared, Fikreselam
Format: Article
Language: English
Online access: Full text
Description
Abstract: The process of brain tumour segmentation entails locating the tumour precisely in images. Magnetic Resonance Imaging (MRI) is typically used by doctors to find brain tumours or tissue abnormalities. Using region-based Convolutional Neural Network (R-CNN) masks, Grad-CAM, and transfer learning, this work offers an effective method for the detection of brain tumours, with the goal of helping doctors make highly accurate diagnoses. A transfer-learning-based model is proposed that delivers high sensitivity and accuracy for brain tumour detection when segmentation is performed with R-CNN masks. The Inception V3, VGG-16, and ResNet-50 architectures were used to train the model, and the method was developed on the Brain MRI Images for Brain Tumour Detection dataset. Performance is evaluated and reported in terms of recall, specificity, sensitivity, accuracy, precision, and F1 score. A thorough analysis compares the proposed model across the three architectures (VGG-16, Inception V3, and ResNet-50), and the VGG-16-inspired variant is also compared against related works. The main objective was to achieve high sensitivity and accuracy; with this approach, an accuracy and sensitivity of around 99% were obtained, substantially higher than in existing work.
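The abstract describes a transfer-learning classifier built on a VGG-16 backbone (alongside Inception V3 and ResNet-50), trained on MRI images and evaluated with accuracy, sensitivity, and related metrics. The sketch below is a minimal, hypothetical reconstruction of such a VGG-16 transfer-learning setup in Keras; the head architecture, input resolution, optimiser, and hyperparameters are illustrative assumptions rather than the authors' configuration, and the upstream R-CNN mask segmentation and Grad-CAM visualisation steps are not shown.

```python
# Hypothetical sketch of a VGG-16 transfer-learning classifier for binary
# tumour detection (tumour present / absent). Layer sizes, learning rate,
# and input resolution are illustrative assumptions, not the paper's values.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16


def build_vgg16_classifier(input_shape=(224, 224, 3)):
    # Load VGG-16 without its ImageNet classification head and freeze it,
    # so only the new head is trained (standard transfer learning).
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # probability of tumour present
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",
        metrics=[
            "accuracy",
            tf.keras.metrics.Recall(name="sensitivity"),   # recall == sensitivity
            tf.keras.metrics.Precision(name="precision"),
        ],
    )
    return model


model = build_vgg16_classifier()
model.summary()
```

Specificity and the F1 score reported in the paper are not built-in Keras training metrics; they would typically be derived from the confusion matrix on a held-out test split (for example with sklearn.metrics.confusion_matrix and f1_score).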
ISSN: 2045-2322
DOI: 10.1038/s41598-024-66554-4