A lightweight model for the retinal disease classification using optical coherence tomography
Published in: Biomedical Signal Processing and Control, 2025-03, Vol. 101, p. 107146, Article 107146
Authors: , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract highlights:
• Proposing a lightweight retinal optical coherence tomography image classification model integrating CNN and Transformer.
• Encoding the local lesion features with the global representation of OCT images.
• Integrating a convolutional block attention module to enhance the representational power.
• Our model has 1.28 M parameters and needs 2.5 ms to predict an image.
• Outperforming five state-of-the-art models on the OCT2017 dataset and three classical models on the OCT-C8 dataset.
Retinal diseases such as age-related macular degeneration and diabetic macular edema lead to irreversible blindness without timely diagnosis and treatment. Optical coherence tomography (OCT) has been widely utilized to detect retinal diseases because of its non-contact and non-invasive imaging characteristics. Given the shortage of ophthalmic medical resources, automatic analysis and diagnosis of retinal OCT images with computer-aided diagnosis algorithms is necessary. In this study, we propose a lightweight retinal OCT image classification model integrating a convolutional neural network (CNN) and Transformer to classify various retinal diseases with few model parameters. Local lesion features extracted by the CNN are encoded together with the whole OCT image through the Transformer, which improves classification ability. A convolutional block attention module is also integrated into our model to enhance its representational power. Compared with several classical models, our model achieves the best accuracy of 0.9800 and recall of 0.9799 with the fewest parameters and the shortest per-image prediction time on the OCT-C8 dataset. Moreover, on the OCT2017 dataset, our model outperforms four of the five state-of-the-art models and is nearly on par with the fifth, achieving an average accuracy, precision, recall, specificity, and F1-score of 0.9985, 0.9970, 0.9970, 0.9990, and 0.9970, respectively. At the same time, the number of parameters of our model has been reduced to just 1.28 M, and the average prediction time for an image is only 2.5 ms.
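The convolutional block attention module (CBAM) mentioned in the abstract refines a CNN feature map with sequential channel and spatial attention. The following is a minimal NumPy sketch of that general mechanism, not the authors' implementation: all weights (`w1`, `w2`, the 7×7 `kernel`) are randomly initialized placeholders for illustration, and the feature map shape is assumed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

def channel_attention(x, w1, w2):
    """Scale each channel of x (C, H, W) by an attention weight
    computed from spatially avg- and max-pooled descriptors passed
    through a shared two-layer MLP (w1: (C//r, C), w2: (C, C//r))."""
    avg = x.mean(axis=(1, 2))                      # (C,)
    mx = x.max(axis=(1, 2))                        # (C,)
    scores = w2 @ relu(w1 @ avg) + w2 @ relu(w1 @ mx)
    return x * sigmoid(scores)[:, None, None]

def spatial_attention(x, kernel):
    """Scale each spatial location of x (C, H, W) by an attention map
    from a naive 'same'-padded conv over channel-wise avg/max pooling
    (kernel: (2, k, k), typically k = 7 in CBAM)."""
    pooled = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, H, W)
    k = kernel.shape[-1]
    pad = k // 2
    xp = np.pad(pooled, ((0, 0), (pad, pad), (pad, pad)))
    h, w = x.shape[1], x.shape[2]
    att = np.empty((h, w))
    for i in range(h):                              # naive convolution
        for j in range(w):
            att[i, j] = np.sum(xp[:, i:i + k, j:j + k] * kernel)
    return x * sigmoid(att)[None, :, :]

def cbam(x, w1, w2, kernel):
    """Channel attention followed by spatial attention."""
    return spatial_attention(channel_attention(x, w1, w2), kernel)

# Demo on a random feature map (8 channels, 16x16, reduction ratio 4).
rng = np.random.default_rng(0)
c, h, w, r = 8, 16, 16, 4
x = rng.standard_normal((c, h, w))
w1 = rng.standard_normal((c // r, c)) * 0.1
w2 = rng.standard_normal((c, c // r)) * 0.1
kernel = rng.standard_normal((2, 7, 7)) * 0.1
y = cbam(x, w1, w2, kernel)
print(y.shape)  # (8, 16, 16)
```

Because both attention stages only rescale the input by factors in (0, 1), the module preserves the feature-map shape and can be dropped between existing CNN blocks without changing downstream layer dimensions.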
ISSN: 1746-8094
DOI: 10.1016/j.bspc.2024.107146