Towards Explainable AI for Channel Estimation in Wireless Communications
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Research into 6G networks has been initiated to support a variety of critical artificial intelligence (AI) assisted applications, such as autonomous driving. In such applications, AI-based decisions, including resource allocation, localization, and channel estimation, must be made in real time. Given the black-box nature of existing AI-based models, it is highly challenging to understand and trust their decision-making behavior. Explaining the logic behind these models through explainable AI (XAI) techniques is therefore essential for their deployment in critical applications. This manuscript proposes a novel XAI-based channel estimation (XAI-CHEST) scheme that provides detailed, plausible interpretations of the deep learning (DL) models employed in doubly-selective channel estimation. The proposed XAI-CHEST scheme identifies the relevant model inputs by inducing high noise on the irrelevant ones. As a result, the behavior of the studied DL-based channel estimators can be further analyzed and evaluated based on the generated interpretations. Simulation results show that the proposed XAI-CHEST scheme provides valid interpretations of the DL-based channel estimators across different scenarios.
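The abstract describes the mechanism only at a high level (inducing high noise on irrelevant inputs to expose the relevant ones). Below is a minimal PyTorch sketch of one plausible reading of that idea: a per-element noise level is learned under an output-fidelity constraint, and elements that tolerate only little noise are flagged as relevant. The function name `noise_based_relevance`, the loss weighting `lam`, the optimizer settings, and the toy model are illustrative assumptions, not the authors' actual XAI-CHEST implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def noise_based_relevance(model, x, n_steps=200, lr=1e-2, lam=10.0):
    """Score each input element by how little noise it tolerates.

    A per-element Gaussian noise level is trained to be as large as
    possible while the model output on the noisy input stays close to
    the output on the clean input. Elements that end up with a low
    noise level are the ones the model relies on (high relevance).
    """
    model.eval()
    with torch.no_grad():
        y_ref = model(x)  # reference output on the clean input

    # Learnable log standard deviation of the injected noise.
    log_std = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([log_std], lr=lr)

    for _ in range(n_steps):
        std = log_std.exp()
        x_noisy = x + std * torch.randn_like(x)  # reparameterized noise
        fidelity = F.mse_loss(model(x_noisy), y_ref)
        # Trade off output fidelity against the amount of injected noise.
        loss = lam * fidelity - std.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Low tolerated noise => relevant input; return a relevance score.
    return (-log_std).detach()

# Toy usage: a linear "estimator" that ignores half of its inputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.zeros(8)
    w[:4] = 1.0                      # only the first 4 inputs matter
    model = nn.Linear(8, 1, bias=False)
    with torch.no_grad():
        model.weight.copy_(w.unsqueeze(0))
    x = torch.randn(1, 8)
    scores = noise_based_relevance(model, x)
    print(scores)  # first 4 entries should score higher than the rest
```

The reparameterized noise keeps the objective differentiable with respect to the learned noise levels, which is what makes gradient-based optimization of this relevance mask possible.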
DOI: 10.48550/arxiv.2307.00952