An Explainable Deep Learning Method for Microwave Head Stroke Localization
Published in: IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology, 2023-12, Vol. 7(4), pp. 1-8
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: In this article, an explainable deep learning scheme is proposed to tackle microwave imaging for the task of multiple object localisation. Deep learning has been applied to microwave imaging tasks because of its strong pattern recognition capabilities. However, the lack of explainability of a model's predictions makes it infeasible to deploy deep learning models in practical applications such as stroke detection and localisation: because the model is a black box, the confidence of its outputs is unknown and the outputs cannot be verified. This article aims to alleviate this concern by applying the gradient-weighted class activation map (Grad-CAM), an explainable artificial intelligence technique, together with the Delay-Multiply-And-Sum (DMAS) algorithm to spatially explain the deep learning model. The Grad-CAM method highlights the parts of the input signal that are important for decision making, and these parts are then mapped to the image domain to provide a more intuitive understanding of the model. The article concludes that the deep learning model learns from reliable information and produces outputs that have a physical basis.
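For readers unfamiliar with Grad-CAM, the sketch below shows how the importance map described in the abstract could be computed for a one-dimensional microwave signal classifier. This is a generic Grad-CAM implementation in PyTorch, not the authors' code: the model, the choice of convolutional layer, and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def grad_cam_1d(model, conv_layer, signal, target_class):
    """Grad-CAM for a 1-D signal classifier (illustrative sketch).

    Weights each feature map of `conv_layer` by the pooled gradient of the
    target-class score, combines them, and rectifies. `model`, `conv_layer`,
    and the assumed input shape (1, channels, time) are hypothetical, not
    the paper's actual architecture.
    """
    activations, gradients = [], []

    # Capture the layer's forward activations and backward gradients.
    fwd = conv_layer.register_forward_hook(
        lambda m, inp, out: activations.append(out))
    bwd = conv_layer.register_full_backward_hook(
        lambda m, g_in, g_out: gradients.append(g_out[0]))

    scores = model(signal)                   # (1, num_classes)
    scores[0, target_class].backward()       # populates the gradient hook
    fwd.remove()
    bwd.remove()

    A = activations[0]                       # (1, K, T') feature maps
    dA = gradients[0]                        # same shape as A
    alpha = dA.mean(dim=2, keepdim=True)     # (1, K, 1) pooled weights
    cam = F.relu((alpha * A).sum(dim=1))     # (1, T') importance map

    # Upsample back to the input length so the map aligns with the signal.
    cam = F.interpolate(cam.unsqueeze(1), size=signal.shape[-1],
                        mode='linear', align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()
```

In a pipeline like the one the abstract describes, a map such as `cam` would then be projected into the image domain by a DMAS-style beamformer, which delays each antenna signal to a candidate pixel, multiplies the delayed signals pairwise, and sums the products, so that highlighted time samples translate into highlighted spatial regions.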
ISSN: 2469-7249, 2469-7257
DOI: 10.1109/JERM.2023.3287681