Multi-modal fusion of deep transfer learning based COVID-19 diagnosis and classification using chest x-ray images
Saved in:
Published in: Multimedia Tools and Applications, 2023-03, Vol. 82 (8), p. 12653-12677
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The COVID-19 pandemic has had a significant impact on global health and on the daily lives of people across the globe. Several initial tests are based on detecting the genetic material of the coronavirus, and they have a low detection rate and are time-consuming. To overcome this issue, radiological images are recommended, where chest X-rays (CXRs) are employed in the diagnostic process. This article introduces a new Multi-modal Fusion of Deep Transfer Learning (MMF-DTL) technique to classify COVID-19. The proposed MMF-DTL model involves three main processes, namely pre-processing, feature extraction, and classification. The MMF-DTL model uses three DL models, namely VGG16, Inception v3, and ResNet50, for feature extraction. Since a single modality would not be adequate to attain an effective detection rate, the integration of the three approaches by means of decision-based multimodal fusion increases the detection rate. So, a fusion of the three DL models takes place to further improve the detection rate. Finally, a softmax classifier is employed to assign test images to a set of six different classes. A wide range of experimental analyses is carried out on the Chest X-Ray dataset. The proposed fusion model is found to be an effective tool for COVID-19 diagnosis using radiological images, with an average sensitivity of 92.96%, specificity of 98.54%, precision of 93.60%, accuracy of 98.80%, F-score of 93.26%, and kappa of 91.86%.
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-022-13739-6
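
For readers who want to experiment with the decision-based fusion idea summarized in the abstract, the following is a minimal sketch in TensorFlow/Keras. It assumes ImageNet-pretrained VGG16, InceptionV3, and ResNet50 backbones, a 224×224 input size, frozen backbones with a small softmax head per branch, and simple unweighted averaging of the three per-model class probabilities; the paper's actual pre-processing, training schedule, and fusion weighting are not reproduced here, and the `branch` helper is purely illustrative.

```python
# Sketch of decision-level fusion of three pretrained CNN backbones for
# 6-class chest X-ray classification (assumptions: Keras, ImageNet weights,
# 224x224 inputs, unweighted probability averaging).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, InceptionV3, ResNet50

NUM_CLASSES = 6            # six target classes mentioned in the abstract
IMG_SHAPE = (224, 224, 3)  # assumed common input size for all three backbones

def branch(backbone_fn, name):
    """Build one backbone branch ending in its own softmax head."""
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=IMG_SHAPE, pooling="avg")
    base.trainable = False  # transfer learning: keep pretrained features frozen
    inp = layers.Input(shape=IMG_SHAPE)
    features = base(inp)
    probs = layers.Dense(NUM_CLASSES, activation="softmax")(features)
    return Model(inp, probs, name=name)

vgg_branch = branch(VGG16, "vgg16_branch")
inc_branch = branch(InceptionV3, "inception_branch")
res_branch = branch(ResNet50, "resnet_branch")

# Decision-level fusion: average the three per-model class probability vectors
# (unweighted averaging is an assumption, not necessarily the paper's rule).
inp = layers.Input(shape=IMG_SHAPE)
fused = layers.Average()([vgg_branch(inp), inc_branch(inp), res_branch(inp)])
fusion_model = Model(inp, fused, name="mmf_dtl_sketch")
fusion_model.compile(optimizer="adam", loss="categorical_crossentropy",
                     metrics=["accuracy"])
```

In practice each branch could first be fine-tuned separately on the chest X-ray data and the fused model then evaluated on held-out images; weighted or majority-vote fusion are straightforward variations on the averaging layer shown above.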