A real-time human bone fracture detection and classification from multi-modal images using deep learning technique

Bibliographic Details
Published in: Applied intelligence (Dordrecht, Netherlands), 2024-10, Vol.54 (19), p.9269-9285
Main authors: Parvin, Shahnaj; Rahman, Abdur
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Human bone is an essential structure that allows the body to move, and bone fractures occur frequently in contemporary society. Doctors use X-rays, Computed Tomography (CT) scans, and Magnetic Resonance Imaging (MRI) to locate broken bones, but manual evaluation of these images is inefficient and error-prone; advanced automated evaluation techniques address these issues. Consequently, it is essential to develop an automated system for identifying fractured bones. This research uses the deep-learning model "You Only Look Once (version 8)" (YOLOv8) to distinguish between healthy and broken bones in multi-modal images. We utilized a customized dataset named "Human Bone Fractures Multi-modal Image Dataset", which includes 641 images spanning ten classes of bone fractures. Because such a small dataset can lead to overfitting, we applied data augmentation to increase the amount of training data. Three experiments were conducted to assess the model's effectiveness. The findings show that the proposed approach effectively identifies and classifies different types of fractures, attaining 95% precision, 93% recall, and 92% mean average precision, demonstrating state-of-the-art performance.
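
For illustration, the kind of pipeline the abstract describes (train YOLOv8 on a small custom fracture dataset with augmentation, then report precision, recall, and mAP) can be sketched with the open-source Ultralytics Python API. This is a minimal sketch under stated assumptions, not the authors' published code: the dataset YAML path, the specific augmentation settings, and the example image file name are hypothetical placeholders.

    # Minimal YOLOv8 training/evaluation sketch (Ultralytics API).
    # Dataset path, augmentation values, and image name are illustrative
    # assumptions, not the authors' actual configuration.
    from ultralytics import YOLO

    # Load a pretrained YOLOv8 detection model (nano variant shown here).
    model = YOLO("yolov8n.pt")

    # Train on a custom dataset described by a YAML file listing image
    # paths and the ten fracture classes (hypothetical path).
    model.train(
        data="bone_fractures.yaml",  # hypothetical dataset config
        epochs=100,
        imgsz=640,
        # Built-in augmentation helps offset the small (641-image) dataset.
        fliplr=0.5,    # horizontal flip probability
        degrees=10.0,  # random rotation range in degrees
        scale=0.5,     # random scale jitter
    )

    # Validate to obtain the metrics reported in the paper:
    # precision, recall, and mean average precision.
    metrics = model.val()
    print("mAP@0.5:        ", metrics.box.map50)
    print("mean precision: ", metrics.box.mp)
    print("mean recall:    ", metrics.box.mr)

    # Run inference on a new radiograph (hypothetical file name).
    results = model("xray_example.jpg")
    results[0].show()

The same validation call is what would surface headline numbers such as the paper's 95% precision, 93% recall, and 92% mAP, though the exact figures naturally depend on the dataset split and training configuration.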
ISSN: 0924-669X
eISSN: 1573-7497
DOI: 10.1007/s10489-024-05588-7