Diabetic Retinopathy Grading by a Source-Free Transfer Learning Approach
Published in: Biomedical Signal Processing and Control, 2022-03, Vol. 73, p. 103423, Article 103423
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: Diabetic retinopathy (DR) is a major cause of blindness in young adults worldwide. With early detection, patients with DR can be treated in time and further deterioration can be prevented; thus, early and accurate DR screening is critical for disease prognosis. However, traditional manual screening is labor-intensive and prone to misdiagnosis when patient volumes are large. In recent years, deep learning methods have achieved remarkable improvements in medical image analysis, making DR detection more reliable and efficient. Nevertheless, existing supervised learning and transfer learning methods require a great deal of labeled data, which is often unavailable in DR screening due to the challenges of medical annotation and privacy concerns. To address this problem, we design a Source-Free Transfer Learning (SFTL) method for referable DR detection, which utilizes unannotated retinal images and employs only the source model throughout the training process. In this paper, we propose two major modules: a target generation module and a collaborative consistency module. The target generation module produces target-style retinal images, trained from the input target data and the source model. In the collaborative consistency module, the classification model is further optimized on the generated target-style images, which in turn guides the generator to produce images with more faithful appearance. Furthermore, a target reconstruction loss is attached to the generator to enhance performance, and a feature consistency loss is introduced to keep the target model from drifting far from the source model.
To evaluate the effectiveness of the SFTL model, we carried out extensive experiments on the APTOS 2019 dataset with a source model from the EyePACS dataset, obtaining an accuracy of 91.2%, a sensitivity of 0.951, and a specificity of 0.858, demonstrating that our proposed SFTL model is more competitive than other state-of-the-art supervised learning methods.
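The training objective sketched in the abstract combines a target reconstruction loss on the generator with a feature consistency loss that keeps the target model close to the source model. A minimal illustration of how such a combined objective could be computed is given below; the function names, the MSE formulation, and the weighting coefficients `lam_rec` and `lam_feat` are assumptions for illustration, not details taken from the paper.

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def sftl_loss(x_target, x_reconstructed, f_source, f_target,
              lam_rec=1.0, lam_feat=0.1):
    """Illustrative SFTL-style objective (hypothetical weights).

    x_target / x_reconstructed: input target image and the generator's
    reconstruction of it (flattened to vectors here for simplicity).
    f_source / f_target: features of the same image under the frozen
    source model and the adapting target model.
    """
    # Target reconstruction loss: the generator should reproduce its input.
    loss_rec = mse(x_target, x_reconstructed)
    # Feature consistency loss: target-model features stay near the
    # source-model features, preventing drift from the source model.
    loss_feat = mse(f_source, f_target)
    return lam_rec * loss_rec + lam_feat * loss_feat
```

A perfect reconstruction with identical features yields zero loss, while either term grows quadratically with the corresponding discrepancy; in practice the relative weights would be tuned on validation data.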
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2021.103423