Classification of stomach infections: A paradigm of convolutional neural network along with classical features fusion and selection
Published in: Microscopy Research and Technique, 2020-05, Vol. 83 (5), p. 562-576
Format: Article
Language: English
Online access: Full text
Abstract: Automated detection and classification of gastric infections (i.e., ulcer, polyp, esophagitis, and bleeding) through wireless capsule endoscopy (WCE) is still a key challenge. Doctors can identify these endoscopic diseases with the help of computer-aided diagnostic (CAD) systems. In this article, a new fully automated system is proposed for the recognition of gastric infections through multi-type feature extraction, fusion, and robust feature selection. Five key steps are performed: database creation, extraction of handcrafted and convolutional neural network (CNN) deep features, fusion of the extracted features, selection of the best features using a genetic algorithm (GA), and recognition. In the feature extraction step, discrete cosine transform, discrete wavelet transform, strong color, and VGG16-based CNN features are extracted. These features are then fused by simple array concatenation, and a GA is run to select the best features based on a K-Nearest Neighbor (KNN) fitness function. Finally, the selected best features are provided to an Ensemble classifier for recognition of gastric diseases. A database is prepared from four datasets (Kvasir, CVC-ClinicDB, Private, and ETIS-LaribPolypDB) covering four types of gastric infection: ulcer, polyp, esophagitis, and bleeding. On this database, the proposed technique performs better than existing methods and achieves an accuracy of 96.5%.
Highlights:
- Strong color features are extracted along with shape features.
- Convolutional neural network features are extracted through a pretrained model.
- All features are fused through an array-based method.
- A genetic algorithm is performed for the selection of robust features.
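The fusion-and-selection pipeline described in the abstract (array concatenation of handcrafted and deep features, then a genetic algorithm with a K-Nearest Neighbor fitness function) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors are synthetic stand-ins for the DCT/DWT/color and VGG16 descriptors, and the GA parameters (population size, generations, mutation rate) are assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's features: "handcrafted" (DCT/DWT/color)
# and "deep" (VGG16) descriptors are simulated as Gaussian vectors whose mean
# differs between the two classes.
n_per_class = 30

def make_class(shift):
    handcrafted = rng.normal(shift, 1.0, size=(n_per_class, 6))
    deep = rng.normal(-shift, 1.0, size=(n_per_class, 10))
    # Fusion by simple array concatenation, as in the abstract.
    return np.hstack([handcrafted, deep])

X = np.vstack([make_class(0.0), make_class(1.5)])
y = np.array([0] * n_per_class + [1] * n_per_class)

def knn_fitness(mask):
    """Leave-one-out 1-NN accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude each sample itself
    pred = y[d.argmin(axis=1)]           # label of nearest neighbor
    return float((pred == y).mean())

def ga_select(n_features, pop_size=20, generations=30, p_mut=0.1):
    """Genetic algorithm over binary feature masks with KNN fitness."""
    pop = rng.random((pop_size, n_features)) < 0.5
    for _ in range(generations):
        fit = np.array([knn_fitness(ind) for ind in pop])
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]      # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)       # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_features) < p_mut  # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents] + children)
    fit = np.array([knn_fitness(ind) for ind in pop])
    return pop[fit.argmax()], float(fit.max())

mask, acc = ga_select(X.shape[1])
print("selected features:", int(mask.sum()), "LOO 1-NN accuracy:", round(acc, 3))
```

In the paper the selected features would then feed an Ensemble classifier; here the KNN fitness score itself serves as the final measure for brevity.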
ISSN: 1059-910X, 1097-0029
DOI: 10.1002/jemt.23447