Application of convolutional neural networks for evaluating the depth of invasion of early gastric cancer based on endoscopic images


Full description

Bibliographic details
Published in: Journal of gastroenterology and hepatology 2022-02, Vol.37 (2), p.352-357
Main authors: Hamada, Kenta, Kawahara, Yoshiro, Tanimoto, Takayoshi, Ohto, Akimitsu, Toda, Akira, Aida, Toshiaki, Yamasaki, Yasushi, Gotoda, Tatsuhiro, Ogawa, Taiji, Abe, Makoto, Okanoue, Shotaro, Takei, Kensuke, Kikuchi, Satoru, Kuroda, Shinji, Fujiwara, Toshiyoshi, Okada, Hiroyuki
Format: Article
Language: eng
Subjects:
Online access: Full text
Description
Summary: Background and Aim Recently, artificial intelligence (AI) has been used in endoscopic examination and is expected to help in endoscopic diagnosis. We evaluated the feasibility of AI using convolutional neural network (CNN) systems for evaluating the depth of invasion of early gastric cancer (EGC), based on endoscopic images. Methods This study used a deep CNN model, ResNet152. From patients who underwent treatment for EGC at our hospital between January 2012 and December 2016, we selected 100 consecutive patients with mucosal (M) cancers and 100 consecutive patients with cancers invading the submucosa (SM cancers). A total of 3508 non‐magnifying endoscopic images of EGCs, including white‐light imaging, linked color imaging, blue laser imaging‐bright, and indigo‐carmine dye contrast imaging, were included in this study. A total of 2288 images from 132 patients served as the development dataset, and 1220 images from 68 patients served as the testing dataset. Invasion depth was evaluated for each image and for each lesion; a majority vote over the per‐image results was applied for the lesion‐based evaluation. Results The sensitivity, specificity, and accuracy for diagnosing M cancer were 84.9% (95% confidence interval [CI] 82.3%–87.5%), 70.7% (95% CI 66.8%–74.6%), and 78.9% (95% CI 76.6%–81.2%), respectively, for image‐based evaluation, and 85.3% (95% CI 73.4%–97.2%), 82.4% (95% CI 69.5%–95.2%), and 83.8% (95% CI 75.1%–92.6%), respectively, for lesion‐based evaluation. Conclusions The application of AI using a CNN to evaluate the depth of invasion of EGCs based on endoscopic images is feasible, and it is worth investing more effort to put this new technology into practical use.
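The study's evaluation scheme (per-image classification collapsed into a per-lesion call by majority vote, then sensitivity, specificity, and accuracy with 95% CIs) can be sketched as below. This is a minimal illustration, not the authors' code: the function names are hypothetical, and the normal-approximation (Wald) interval is an assumption, since the article's abstract does not state which CI method was used.

```python
from collections import Counter

def majority_vote(labels):
    # Collapse per-image predictions for one lesion into a single call.
    # Ties resolve to the first-seen label (Counter insertion ordering).
    return Counter(labels).most_common(1)[0][0]

def diagnostic_metrics(y_true, y_pred, positive="M"):
    # Sensitivity, specificity, and accuracy for diagnosing the positive
    # class (here, mucosal [M] cancer) against the rest (SM cancer).
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

def wald_ci(p, n, z=1.96):
    # Normal-approximation 95% CI for a proportion p observed over n cases.
    half = z * (p * (1 - p) / n) ** 0.5
    return p - half, p + half

# Example: two lesions, each represented by several per-image predictions.
per_lesion_predictions = {
    "lesion_1": ["M", "M", "SM"],
    "lesion_2": ["SM", "SM", "M"],
}
lesion_calls = {k: majority_vote(v) for k, v in per_lesion_predictions.items()}
```

Image-based metrics would be computed over all test images; lesion-based metrics over the voted calls, one per lesion, which is why the lesion-based CIs in the abstract are wider (68 lesions vs. 1220 images).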
ISSN: 0815-9319; 1440-1746
DOI:10.1111/jgh.15725