Classification of Gleason Grading in Prostate Cancer Histopathology Images Using Deep Learning Techniques: YOLO, Vision Transformers, and Vision Mamba

Bibliographic Details
Published in: arXiv.org, 2024-10
Authors: Amin Malekmohammadi, Ali Badiezadeh, Seyed Mostafa Mirhassani, Parisa Gifani, Majid Vafaeezadeh
Format: Article
Language: English
Online access: Full text
Description
Abstract: Prostate cancer ranks among the leading health issues affecting men, with the Gleason scoring system serving as the primary method for diagnosis and prognosis. This system relies on expert pathologists to evaluate prostate tissue samples and assign a Gleason grade, a task that requires significant time and manual effort. To address this challenge, artificial intelligence (AI) solutions have been explored to automate the grading process. This study evaluates and compares the effectiveness of three deep learning methodologies, YOLO, Vision Transformers, and Vision Mamba, in accurately classifying Gleason grades from histopathology images, with the goal of enhancing diagnostic precision and efficiency in prostate cancer management. Two publicly available datasets, Gleason2019 and SICAPv2, were used to train and test the YOLO, Vision Transformer, and Vision Mamba models. Each model was assessed on its ability to classify Gleason grades accurately, using metrics such as false positive rate, false negative rate, precision, and recall. The study also examined the computational efficiency and clinical applicability of each method. Vision Mamba demonstrated superior performance across all metrics, achieving high precision and recall while minimizing false positives and negatives. YOLO showed promise in speed and efficiency, which is particularly beneficial for real-time analysis. Vision Transformers excelled at capturing long-range dependencies within images, although with higher computational complexity than the other models. Vision Mamba emerges as the most effective model for Gleason grade classification in histopathology images, offering a balance between accuracy and computational efficiency.
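
As an illustration of the per-class metrics named in the abstract (precision, recall, false positive rate, and false negative rate), the sketch below shows how they can be derived from a multi-class confusion matrix for a four-grade Gleason classification task. The label set and sample predictions are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch: per-class precision, recall, FPR, and FNR from a confusion matrix.
# The four hypothetical classes stand in for Gleason grade categories.
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels: 0 = benign, 1 = Gleason 3, 2 = Gleason 4, 3 = Gleason 5
y_true = np.array([0, 1, 2, 3, 2, 1, 0, 3, 2, 1])
y_pred = np.array([0, 1, 2, 2, 2, 1, 0, 3, 1, 1])

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3])

for k in range(cm.shape[0]):
    tp = cm[k, k]                     # class k correctly predicted
    fn = cm[k, :].sum() - tp          # class k predicted as something else
    fp = cm[:, k].sum() - tp          # other classes predicted as k
    tn = cm.sum() - tp - fn - fp
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    print(f"class {k}: precision={precision:.2f} recall={recall:.2f} "
          f"FPR={fpr:.2f} FNR={fnr:.2f}")
```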
ISSN: 2331-8422