Multi-Head Transformer Architecture with Higher Dimensional Feature Representation for Massive MIMO CSI Feedback

Bibliographic Details
Published in: Applied Sciences 2024-02, Vol. 14 (4), p. 1356
Main Authors: Chen, Qing; Guo, Aihuang; Cui, Yaodong
Format: Article
Language: English
Online Access: Full text
Description
Summary: To achieve the anticipated performance of massive multiple input multiple output (MIMO) systems in wireless communication, it is imperative that the user equipment (UE) accurately feeds the channel state information (CSI) back to the base station (BS) over the uplink. To reduce the feedback overhead, an increasing number of deep learning (DL)-based networks have emerged, aimed at compressing and subsequently recovering the CSI. Various novel structures have been introduced, among which the Transformer architecture has enabled a new level of precision in CSI feedback. In this paper, we propose a new method named TransNet+, built upon the Transformer-based TransNet, by updating the multi-head attention layer and implementing an improved training scheme. The simulation results demonstrate that TransNet+ outperforms existing methods in terms of recovery accuracy and achieves state-of-the-art performance.
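
The record does not reproduce the paper's architecture, but the idea named in the title, attention heads operating in a feature space of higher dimension than the conventional d_model / n_heads split, can be sketched as follows. This is a minimal, hypothetical PyTorch illustration: the module name WideHeadAttention, all dimensions, and the treatment of a CSI matrix as a sequence of "tokens" are assumptions for illustration, not the authors' TransNet+ implementation.

```python
import torch
import torch.nn as nn


class WideHeadAttention(nn.Module):
    """Multi-head self-attention whose per-head dimension d_head is chosen
    larger than the usual d_model // n_heads, so each head works in a
    higher-dimensional feature representation before projecting back."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, d_head: int = 32):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        inner = n_heads * d_head                    # 128 here, wider than d_model
        self.qkv = nn.Linear(d_model, 3 * inner)    # joint Q/K/V projection
        self.out = nn.Linear(inner, d_model)        # map back to the model width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, _ = x.shape                           # (batch, tokens, d_model)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split into heads: (batch, n_heads, tokens, d_head).
        q, k, v = (t.view(b, s, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        y = torch.softmax(scores, dim=-1) @ v       # (batch, n_heads, tokens, d_head)
        y = y.transpose(1, 2).reshape(b, s, -1)     # concatenate the heads
        return self.out(y)


# Toy usage: a CSI sample flattened into 32 "tokens" of width 64 (illustrative sizes).
x = torch.randn(2, 32, 64)
print(WideHeadAttention()(x).shape)                 # torch.Size([2, 32, 64])
```

The only change relative to a standard multi-head layer is that the inner projection width n_heads * d_head exceeds d_model, which increases the capacity of each head at the cost of larger Q/K/V and output projection matrices.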
ISSN: 2076-3417
DOI: 10.3390/app14041356