Triple attention learning for classification of 14 thoracic diseases using chest radiography

• Propose a triple attention network (A3Net) for thoracic disease diagnosis on chest X-rays.
• Learn channel-wise, element-wise, and scale-wise attention simultaneously.
• Incorporate three attention learning mechanisms into a deep classification model.
• Achieve the highest average AUC on the ChestX-ray14 dataset without using external data.

Bibliographic details
Published in: Medical Image Analysis, 2021-01, Vol. 67, Article 101846
Authors: Wang, Hongyu; Wang, Shanshan; Qin, Zibo; Zhang, Yanning; Li, Ruijiang; Xia, Yong
Format: Article
Language: English
Description
Abstract: Chest X-ray is the most common radiology examination for the diagnosis of thoracic diseases. However, due to the complexity of pathological abnormalities and the lack of detailed annotation of those abnormalities, computer-aided diagnosis (CAD) of thoracic diseases remains challenging. In this paper, we propose the triple-attention learning (A3Net) model for this CAD task. The model uses the pre-trained DenseNet-121 as the backbone network for feature extraction and integrates three attention modules in a unified framework for channel-wise, element-wise, and scale-wise attention learning. Specifically, the channel-wise attention prompts the deep model to emphasize the discriminative channels of the feature maps; the element-wise attention enables the deep model to focus on the regions of pathological abnormalities; and the scale-wise attention allows the deep model to recalibrate the feature maps at different scales. The proposed model has been evaluated on 112,120 images in the ChestX-ray14 dataset with the official patient-level data split. Compared to state-of-the-art deep learning models, our model achieves the highest per-class AUC in classifying 13 of the 14 thoracic diseases and the highest average per-class AUC of 0.826 over the 14 thoracic diseases.
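To make the three attention mechanisms described in the abstract concrete, the sketch below shows a minimal PyTorch-style model that applies channel-wise, element-wise, and scale-wise attention to DenseNet-121 feature maps before a 14-way multi-label classifier. This is an illustrative reconstruction based only on the abstract, not the authors' released code; the module designs, reduction ratio, pooling scales, and the way the three branches are chained are assumptions.

```python
# Illustrative sketch of the triple-attention idea (channel-wise, element-wise,
# scale-wise attention on DenseNet-121 features). Module internals and the
# fusion scheme are assumptions, not the authors' implementation.
# Assumes torch and a recent torchvision (>= 0.13) are installed.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gating over feature-map channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)   # per-channel weights in [0, 1]
        return x * w

class ElementAttention(nn.Module):
    """Spatial (per-element) gating that highlights abnormal regions."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
    def forward(self, x):
        return x * self.conv(x)                    # broadcast an H x W attention map

class ScaleAttention(nn.Module):
    """Recalibrates feature maps pooled at several spatial scales."""
    def __init__(self, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.weights = nn.Parameter(torch.ones(len(scales)))  # learned per-scale weights
    def forward(self, x):
        h, w = x.shape[2:]
        soft = torch.softmax(self.weights, dim=0)
        out = 0
        for a, s in zip(soft, self.scales):
            pooled = F.adaptive_avg_pool2d(x, (max(h // s, 1), max(w // s, 1)))
            out = out + a * F.interpolate(pooled, size=(h, w), mode="nearest")
        return out

class TripleAttentionNet(nn.Module):
    """DenseNet-121 backbone, the three attention modules, and a 14-way
    multi-label head (one sigmoid output per thoracic disease)."""
    def __init__(self, num_classes=14):
        super().__init__()
        self.backbone = torchvision.models.densenet121(weights="IMAGENET1K_V1").features
        c = 1024                                    # DenseNet-121 feature channels
        self.channel_att = ChannelAttention(c)
        self.element_att = ElementAttention(c)
        self.scale_att = ScaleAttention()
        self.classifier = nn.Linear(c, num_classes)
    def forward(self, x):
        f = self.backbone(x)
        f = self.scale_att(self.element_att(self.channel_att(f)))
        f = F.adaptive_avg_pool2d(f, 1).flatten(1)
        return self.classifier(f)                   # train with sigmoid + BCE for multi-label

# Example usage on a batch of 224x224 chest radiographs (3-channel input):
# logits = TripleAttentionNet()(torch.randn(2, 3, 224, 224))  # shape (2, 14)
```

Chaining the modules sequentially is one plausible reading of "integrates three attention modules in a unified framework"; the paper itself should be consulted for the exact placement within the backbone and the fusion of the attended features.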
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2020.101846