MDP-HML: an efficient detection method for multiple human disease using retinal fundus images based on hybrid learning techniques
Saved in:
Published in: Multimedia Systems 2023-06, Vol. 29 (3), p. 961-979
Author:
Format: Article
Language: English
Subjects:
Online Access: Full text
Abstract: Recently, medical image processing has improved the quality of medical images for disease prediction in humans. For multiple disease prediction (MDP), we propose an efficient detection method using retinal fundus images based on hybrid machine learning techniques (MDP-HML). First, we introduce an improved weed optimization (IWO) algorithm for segmentation, which segments disease areas from the original image. Second, we develop a salp swarm optimization (SSO) algorithm for multi-feature extraction from the segmented images, which enhances prediction accuracy. Third, we present a new classifier, a chaotic atom search optimization-based deep learning (CAS-DL) classifier, for multi-disease classification in humans from a single retinal fundus image. Finally, the performance of the proposed MDP-HML technique is analyzed on different retinal datasets, and the results are compared with state-of-the-art techniques in terms of accuracy, precision, recall and F-measure. The accuracy of the proposed MDP-HML technique is 20%, 22.3%, 22.7% and 32.6% higher than the existing SVMGA, ANN, SVM and PNN classifiers, respectively. Its sensitivity is 12%, 13%, 14% and 15% higher, and its specificity is 12.65%, 14.34%, 14.91% and 15.23% higher, than the same SVMGA, ANN, SVM and PNN classifiers, respectively.
ISSN: 0942-4962, 1432-1882
DOI: 10.1007/s00530-022-01029-y