Improved classification and localization approach to small bowel capsule endoscopy using convolutional neural network

Bibliographic Details
Published in: Digestive Endoscopy 2021-05, Vol. 33 (4), p. 598-607
Authors: Hwang, Yunseob; Lee, Han Hee; Park, Chunghyun; Tama, Bayu Adhi; Kim, Jin Su; Cheung, Dae Young; Chung, Woo Chul; Cho, Young‐Seok; Lee, Kang‐Moon; Choi, Myung‐Gyu; Lee, Seungchul; Lee, Bo‐In
Format: Article
Language: English
Online access: Full text
Description
Summary: Background Although great advances in artificial intelligence for interpreting small bowel capsule endoscopy (SBCE) images have been made in recent years, its practical use is still limited. The aim of this study was to develop a more practical convolutional neural network (CNN) algorithm for the automatic detection of various small bowel lesions.
Methods A total of 7556 images were collected for the training dataset from 526 SBCE videos. Abnormal images were classified into two categories: hemorrhagic lesions (red spot/angioectasia/active bleeding) and ulcerative lesions (erosion/ulcer/stricture). A CNN algorithm based on VGGNet was trained in two different ways: the combined model (hemorrhagic and ulcerative lesions trained separately) and the binary model (all abnormal images trained without discrimination). The detected lesions were visualized using a gradient-weighted class activation map (Grad-CAM). The two models were validated on 5760 independent images taken at two other academic hospitals.
Results Both the combined and binary models achieved high accuracy for lesion detection, and the difference between the two models was not significant (96.83% vs 96.62%, P = 0.122). However, the combined model showed higher sensitivity (97.61% vs 95.07%, P …)
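The pipeline summarized above (a VGGNet-based classifier whose detections are localized with Grad-CAM) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the VGG-16 backbone from torchvision, the 224×224 input size, and the three-class head (normal/hemorrhagic/ulcerative, mirroring the "combined" setup) are all assumptions made for the sketch.

    # Minimal sketch of a VGG-based lesion classifier with Grad-CAM
    # localization (assumed PyTorch/torchvision; not the authors' code).
    import torch
    import torch.nn.functional as F
    from torchvision import models

    NUM_CLASSES = 3  # assumed "combined" head: normal / hemorrhagic / ulcerative

    model = models.vgg16(weights=None)  # abstract specifies a VGGNet backbone
    model.classifier[6] = torch.nn.Linear(4096, NUM_CLASSES)
    model.eval()

    # Hook the last convolutional layer to capture its activations and the
    # gradient of the class score with respect to them (the Grad-CAM inputs).
    feats, grads = {}, {}
    last_conv = model.features[28]  # final Conv2d in VGG-16
    last_conv.register_forward_hook(lambda m, i, o: feats.update(a=o))
    last_conv.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    def grad_cam(image):
        """image: (1, 3, 224, 224) tensor -> (224, 224) heat map in [0, 1]."""
        logits = model(image)
        cls = logits.argmax(dim=1).item()  # visualize the predicted class
        model.zero_grad()
        logits[0, cls].backward()
        weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # per-channel weights
        cam = F.relu((weights * feats["a"]).sum(dim=1))      # coarse (1, h, w) map
        cam = cam / (cam.max() + 1e-8)                       # normalize to [0, 1]
        return F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                             mode="bilinear", align_corners=False)[0, 0]

    heat = grad_cam(torch.randn(1, 3, 224, 224))  # dummy input for illustration

The binary model described in the abstract would differ only in the head (two classes, normal vs abnormal); the Grad-CAM step is unchanged.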
ISSN: 0915-5635, 1443-1661
DOI: 10.1111/den.13787