Brain tumor segmentation using UNet-few shot schematic segmentation

Bibliographic Details
Published in: ITM Web of Conferences, 2023, Vol. 56, Art. no. 04006
Main Authors: L K, Pavithra; Paramanandham, Nirmala; Sharan, Tanya; Kumar Sarkar, Ronit; Gupta, Samraj
Format: Article
Language: English
Online Access: Full text
Description
Summary: Early detection and selection of an appropriate therapy improve the survival of people with cancer. A key step in the diagnosis and treatment of brain tumors is accurate and reliable segmentation. Given their irregular shape and unclear boundaries, gliomas are among the most difficult brain tumors to detect, and because of the wide variation in their appearance, automated segmentation of glioma tumors remains an open problem. This article reports improved UNet-based architectures for the automatic segmentation of brain tumors from MRI images. Training semantic segmentation models requires a large amount of finely annotated data, which makes it difficult to adapt quickly to novel classes for which such data are unavailable. Classic Few-Shot Segmentation attempts to address this issue but has other shortcomings. Hence, this paper discusses Generalized Few-Shot Semantic Segmentation, which analyzes the ability to simultaneously segment novel categories with few examples and base categories with sufficient examples. Context-Aware Prototype Learning (CAPL) is used to improve performance by exploiting co-occurrence prior knowledge from support samples and by dynamically enriching the classifier with contextual information, conditioned on the content of each query image. Results show that the developed model outperforms comparable approaches.
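To make the prototype idea behind the summary concrete, the following is a minimal sketch of the standard building block used in prototype-based few-shot segmentation schemes such as CAPL: a class prototype is computed by masked average pooling over support features, and query pixels are labeled by their nearest prototype. All names, the toy 2-D features, and the distance choice here are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: prototype learning via masked average pooling,
# a common core of few-shot segmentation methods (not the authors' code).

def masked_average_pool(features, mask):
    """Average the feature vectors of the pixels where mask == 1,
    producing a single class prototype."""
    selected = [f for f, m in zip(features, mask) if m == 1]
    dim = len(features[0])
    if not selected:
        return [0.0] * dim
    return [sum(f[d] for f in selected) / len(selected) for d in range(dim)]

def classify_pixels(features, prototypes):
    """Assign each pixel the index of its nearest prototype
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(prototypes)), key=lambda k: dist(f, prototypes[k]))
            for f in features]

# Toy support "image": four pixels with 2-D features and a tumor mask.
support_feats = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
tumor_mask    = [1, 1, 0, 0]
bg_mask       = [0, 0, 1, 1]

protos = [masked_average_pool(support_feats, bg_mask),     # class 0: background
          masked_average_pool(support_feats, tumor_mask)]  # class 1: tumor

query_feats = [[0.85, 0.15], [0.15, 0.85]]
print(classify_pixels(query_feats, protos))  # → [1, 0]
```

CAPL extends this basic scheme by adapting the prototypes with contextual cues drawn from each query image, rather than keeping them fixed after support pooling.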
ISSN: 2271-2097
2431-7578
DOI: 10.1051/itmconf/20235604006