Spotting malignancies from gastric endoscopic images using deep learning
Published in: Surgical Endoscopy, 2019-11, Vol. 33 (11), pp. 3790-3797
Main authors:
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract:
Background
Gastric cancer is a common malignancy, with yearly occurrences exceeding one million worldwide in 2017. Typically, ulcerous and cancerous tissues develop abnormal morphologies over the course of progression. Endoscopy is a routinely adopted means of examining the gastrointestinal tract for malignancy, and early, timely detection of malignancy closely correlates with good prognosis. However, the repeated presentation of similar frames during gastrointestinal endoscopy often weakens practitioners' attention, so that patients with true disease are missed, incurring higher medical costs and unnecessary morbidity. An automatic means of spotting visual abnormalities and prompting medical staff to perform a more thorough examination is therefore highly needed.
Methods
We classified benign ulcers and cancers in gastrointestinal endoscopic color images using deep neural networks and a transfer-learning approach. Using clinical data gathered from Gil Hospital, we built a dataset comprising 200 normal, 367 cancer, and 220 ulcer cases, and applied Inception, ResNet, and VGGNet models pretrained on ImageNet. Three classes were defined (normal, benign ulcer, and cancer), and three separate binary classifiers were built for the corresponding classification tasks: normal vs. cancer, normal vs. ulcer, and cancer vs. ulcer. For each task, considering the inherent randomness of the deep learning process, we repeated the data partitioning and model building 100 times and averaged the performance values.
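As a rough illustration of the transfer-learning setup described above, the sketch below builds one of the binary classifiers (e.g., normal vs. cancer) on top of an ImageNet-pretrained ResNet in PyTorch. The choice of ResNet-50, the input size, the preprocessing, and the optimizer settings are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Illustrative sketch only: the abstract reports ImageNet-pretrained Inception,
# ResNet, and VGGNet models; the exact variants and hyperparameters here are
# assumptions, not the authors' code.

def build_binary_classifier():
    # Start from an ImageNet-pretrained ResNet (transfer learning).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    # Replace the final fully connected layer with a 2-way head,
    # e.g. normal vs. cancer.
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

# Typical ImageNet-style preprocessing for endoscopic color images (assumed).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = build_binary_classifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```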
Results
The areas under the receiver operating characteristic curves were 0.95, 0.97, and 0.85 for the three classifiers, with ResNet showing the highest level of performance. The tasks involving normal images, i.e., normal vs. ulcer and normal vs. cancer, resulted in accuracies above 90%. The ulcer vs. cancer classification resulted in a lower accuracy of 77.1%, possibly because the difference in appearance between ulcer and cancer is smaller than in the tasks involving normal images.
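For context on how such averaged figures can be produced, the following is a minimal sketch of the repeated-evaluation protocol described in Methods (100 random partitions, with AUC and accuracy averaged). The 80/20 split ratio, the scikit-learn metrics, and the `train_and_predict` helper are assumptions, not details reported in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split

def repeated_evaluation(images, labels, train_and_predict, n_repeats=100):
    """Average AUC and accuracy over repeated random train/test partitions."""
    aucs, accs = [], []
    for seed in range(n_repeats):
        # Stratified random split; the 80/20 ratio is an assumption.
        X_tr, X_te, y_tr, y_te = train_test_split(
            images, labels, test_size=0.2, stratify=labels, random_state=seed)
        # train_and_predict is a hypothetical helper that trains a classifier
        # on the training split and returns predicted probabilities of the
        # positive class for the test split.
        scores = np.asarray(train_and_predict(X_tr, y_tr, X_te))
        aucs.append(roc_auc_score(y_te, scores))
        accs.append(accuracy_score(y_te, (scores > 0.5).astype(int)))
    return float(np.mean(aucs)), float(np.mean(accs))
```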
Conclusions
The overall level of performance of the proposed method was very promising, encouraging applications in clinical environments. Automatic classification using deep learning, as proposed, can complement practitioners' manual inspection and minimize the danger of positives being missed because of the repetitive sequence of endoscopic frames and weakening attention.
ISSN: 0930-2794, 1432-2218
DOI: 10.1007/s00464-019-06677-2