Lung Tumor Localization and Visualization in Chest X-Ray Images Using Deep Fusion Network and Class Activation Mapping


Full Description

Saved in:
Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, p. 1-1
Main Authors: Suryani, Ade Irma; Chang, Chuan-Wang; Feng, Yu-Fan; Lin, Tin-Kwang; Lin, Chih-Wen; Cheng, Jen-Chieh; Chang, Chuan-Yu
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Chest X-ray is a radiological clinical assessment tool commonly used to detect different types of lung disease, including lung tumors. In this paper, we use Segmentation-based Deep Fusion Networks and Squeeze-and-Excitation blocks for model training. The proposed approach uses both whole and cropped lung X-ray images and adds an attention mechanism to address problems encountered during lesion identification, such as image misalignment, possible false positives from irrelevant objects, and the loss of small objects after image resizing. Two CNNs are used for feature extraction, and the extracted features are concatenated to form the final output, which is used to determine the presence of lung tumors in the image. Unlike previous methods, which identify lesion heatmaps from X-ray images, we use Semantic Segmentation via Gradient-Weighted Class Activation Mapping (Seg-Grad-CAM) to add semantic data for improved lung tumor localization. Experimental results show that our method achieves 98.51% accuracy and 99.01% sensitivity in classifying chest X-ray images with and without lung tumors. Furthermore, we combine Seg-Grad-CAM and semantic segmentation for feature visualization. Experimental results show that the proposed approach achieves better results than previous methods that use weakly supervised learning for localization. The method proposed in this paper reduces errors caused by subjective differences among radiologists, improves the efficiency of image interpretation, and facilitates correct treatment decisions.
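The class-activation weighting underlying the Grad-CAM family of methods cited in the abstract can be sketched as follows. This is a minimal NumPy illustration of vanilla Grad-CAM channel weighting, not the authors' Seg-Grad-CAM implementation; the array shapes and the toy feature/gradient values are assumptions for demonstration only.

```python
import numpy as np

def grad_cam_heatmap(feature_maps, gradients):
    """Grad-CAM: weight each feature-map channel by the spatial mean of the
    class-score gradient over that channel, sum the weighted channels, and
    apply ReLU so only positive evidence for the class remains.

    feature_maps: (C, H, W) activations from the last convolutional layer
    gradients:    (C, H, W) gradient of the class score w.r.t. those activations
    """
    # alpha_k: global-average-pooled gradient per channel (importance weight)
    alphas = gradients.mean(axis=(1, 2))                                # (C,)
    # weighted sum over channels, then ReLU
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # normalize to [0, 1] for overlay visualization (guard against all-zero maps)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# toy example (hypothetical values): 2 channels, 2x2 spatial resolution
feats = np.array([[[1.0, 0.0], [0.0, 0.0]],
                  [[0.0, 2.0], [0.0, 0.0]]])
grads = np.array([[[ 1.0,  1.0], [ 1.0,  1.0]],   # alpha_0 = +1
                  [[-1.0, -1.0], [-1.0, -1.0]]])  # alpha_1 = -1
cam = grad_cam_heatmap(feats, grads)  # only channel 0's activation survives ReLU
```

In a real pipeline the gradients would come from backpropagating the tumor-class score through the trained CNN, and the resulting low-resolution heatmap would be upsampled to the input image size before being overlaid on the X-ray.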
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3224486