StoHisNet: A hybrid multi-classification model with CNN and Transformer for gastric pathology images


Bibliographic Details
Published in: Computer Methods and Programs in Biomedicine, June 2022, Vol. 221, Article 106924
Authors: Fu, Bangkang; Zhang, Mudan; He, Junjie; Cao, Ying; Guo, Yuchen; Wang, Rongpin
Format: Article
Language: English
Online access: Full text
Abstract:

Highlights:
• We propose a hybrid CNN–Transformer network that integrates global and local information for better results.
• We successfully applied the Transformer to the multi-classification of gastric pathological images for the first time.
• StoHisNet was evaluated on three different histopathological image datasets and achieved promising results.

Gastric cancer has high morbidity and mortality compared with other cancers. Accurate histopathological diagnosis is of great significance for the treatment of gastric cancer. With the development of artificial intelligence, many researchers have applied deep learning to the classification of gastric cancer pathological images. However, most studies have performed only binary classification of gastric cancer pathological images, which is insufficient for clinical requirements. Therefore, we propose a multi-classification method based on deep learning with greater practical clinical value. In this study, we developed a novel multi-scale model, StoHisNet, based on the Transformer and the convolutional neural network (CNN) for the multi-classification task. StoHisNet adopts the Transformer to learn global features and thereby alleviate the inherent limitations of the convolution operation. The proposed StoHisNet classifies publicly available gastric pathological images into four categories: normal tissue, tubular adenocarcinoma, mucinous adenocarcinoma, and papillary adenocarcinoma. The accuracy, F1-score, recall, and precision of the proposed model on the public gastric pathological image dataset were 94.69%, 94.96%, 94.95%, and 94.97%, respectively. We conducted additional experiments on two other public datasets to verify the generalization ability of the model. On the BreakHis dataset, our model performed better than other classification models, with an accuracy of 91.64%. Similarly, on the four-class task on the Endometrium dataset, our model showed better classification ability than the others, with an accuracy of 81.74%. These experiments show that the proposed model has excellent classification and generalization ability. StoHisNet achieved high performance in the multi-classification of gastric histopathological images and showed strong generalization ability on other pathological datasets. This model may be a potential tool to assist pathologists in the analysis of gastric histopathological images.
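To make the general idea of the hybrid design concrete, the sketch below shows a minimal CNN + Transformer classifier in PyTorch: a convolutional stem extracts local features, the spatial positions of the resulting feature map are treated as tokens for a Transformer encoder that models global context, and a linear head produces four-class logits. All layer sizes, the pooling strategy, and the module names here are assumptions made for illustration; this is not the authors' actual StoHisNet architecture.

```python
# Illustrative hybrid CNN + Transformer classifier (NOT the authors' StoHisNet;
# all dimensions and design choices below are assumptions for demonstration).
import torch
import torch.nn as nn


class HybridCNNTransformer(nn.Module):
    def __init__(self, num_classes: int = 4, embed_dim: int = 256,
                 num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        # CNN stem: captures local texture/morphology and downsamples the image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim), nn.ReLU(inplace=True),
        )
        # Transformer encoder: self-attention over the CNN feature-map positions
        # models global dependencies that convolutions alone capture poorly.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=embed_dim * 4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)                        # (B, C, H, W) local features
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)          # global self-attention
        pooled = tokens.mean(dim=1)                # average-pool over tokens
        return self.head(pooled)                   # four-class logits


if __name__ == "__main__":
    model = HybridCNNTransformer(num_classes=4)
    logits = model(torch.randn(2, 3, 224, 224))    # two hypothetical 224x224 RGB patches
    print(logits.shape)                            # torch.Size([2, 4])
```

In this sketch, global average pooling over the token sequence stands in for whatever aggregation the paper actually uses; readers should consult the article for the real multi-scale design and training details.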
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2022.106924