Domain affiliated distilled knowledge transfer for improved convergence of Ph-negative MPN identifier

Bibliographic Details
Published in: PLoS ONE 2024-09, Vol. 19 (9), p. e0303541
Main authors: Reza, Md Tanzim; Alam, Md Golam Rabiul; Rahman, Rafeed; Dipto, Shakib Mahmud
Format: Article
Language: English
Online access: Full text
Abstract: Ph-negative myeloproliferative neoplasm (MPN) is a rare yet dangerous disease that can progress into more severe disorders. Clinical diagnosis exists but often requires collecting multiple types of pathological data, which can be tedious and time-consuming. Meanwhile, deep learning-based studies are rare and often must rely on small amounts of pathological data due to the rarity of the disease. In addition, existing work does not address this data scarcity beyond common techniques such as data augmentation, which leaves room for performance improvement. To tackle the issue, the proposed research utilizes distilled knowledge learned from a larger dataset to boost the performance of a lightweight model trained on a small MPN dataset. First, a 50-layer ResNet model is trained on a large lymph node image dataset of 327,680 images; the trained knowledge is then distilled into a small 4-layer CNN model. Afterward, the CNN model is initialized with these pre-trained weights and further trained on a small MPN dataset of 300 images. Empirical analysis shows that the CNN with distilled knowledge achieves 97% accuracy, compared to 89.67% for an identical CNN trained from scratch. The distilled knowledge transfer approach also proves more effective than simpler approaches to data scarcity, such as augmentation and manual feature extraction. Overall, the research affirms the effectiveness of transferring distilled knowledge to address data scarcity and achieve better convergence when training a lightweight model on a Ph-negative MPN image dataset.
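To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of the distillation stage, assuming a binary classification setup. The SmallCNN architecture, the temperature T, the loss weight alpha, the optimizer, and the 96x96 input size are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class SmallCNN(nn.Module):
    """Hypothetical lightweight 4-layer student CNN (exact architecture assumed)."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target KL term (scaled by T^2, as in standard knowledge distillation)
    # blended with hard-label cross-entropy; T and alpha are assumed values.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


# Stage 1: the ResNet-50 teacher is assumed to be already trained on the
# large lymph node patch dataset; it is frozen during distillation.
teacher = resnet50(num_classes=2)
teacher.eval()

student = SmallCNN(num_classes=2)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)


def distill_step(images, labels):
    # One distillation update: the student matches the teacher's softened
    # outputs while also fitting the ground-truth labels.
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Smoke test on random tensors standing in for lymph node patches.
images = torch.randn(8, 3, 96, 96)
labels = torch.randint(0, 2, (8,))
print(distill_step(images, labels))

In the second stage, the distilled student weights would serve as the initialization for fine-tuning on the 300-image MPN dataset with a plain cross-entropy loss, which is where the reported accuracy gain over a from-scratch clone is measured.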
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0303541