Multi‐Wound Classification: Exploring Image Enhancement and Deep Learning Techniques


Detailed Description

Bibliographic Details
Published in: Engineering Reports (Hoboken, N.J.), 2025-01, Vol. 7 (1), p. n/a
Main authors: Odame, Prince, Ahiamadzor, Maxwell Mawube, Derkyi, Nana Kwaku Baah, Boateng, Kofi Agyekum, Sarfo‐Acheampong, Kelvin, Tchao, Eric Tutu, Agbemenu, Andrew Selasi, Nunoo‐Mensah, Henry, Agyapong, Dorothy Araba Yakoba, Kponyo, Jerry John
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: ABSTRACT Wounds contribute to 30%–42% of hospital visits and 9% of deaths but remain underreported in Africa. Diseases and surgeries increase wound prevalence, especially in rural areas where 27%–82% of people live and health facilities are poor or non‐existent. This research aims to design a disease‐related wound classification model for online diagnosis and telemedicine support for traditional health practitioners and village health workers. This paper focuses on wounds from diabetic ulcers, pressure ulcers, surgery, and venous ulcers. The approaches used included Contrast Limited Adaptive Histogram Equalization (CLAHE) with machine and deep learning models, Discrete Wavelet Transformations (DWT) with a novel Gated Wavelet Convolutional Neural Network (CNN) model, and FixCaps, an improved version of Capsule Networks utilizing a Convolutional Block Attention Module (CBAM) to reduce spatial information loss. The performance metrics showed similar results for the first two approaches, but FixCaps was the most proficient, with accuracy, precision, recall, and F‐score of 93.83%, 95.41%, 88.63%, and 90.93%, respectively. FixCaps had trainable parameters of about 8.28 MB, compared with the 195.64 MB of the Gated Wavelet CNN model. Wound classification models using image enhancement and deep learning techniques were designed. FixCaps was the most proficient model, with an accuracy of 93.33% and trainable parameters of 8.28 MB compared with 195.64 MB for the Gated Wavelet CNN model. This can be used in virtual medicine to support rural health workers.
ISSN: 2577-8196
DOI: 10.1002/eng2.70001