Efficient Machine Learning-Enhanced Channel Estimation for OFDM Systems


Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, pp. 100839-100850
Main Authors: Jebur, Bilal A., Alkassar, Sinan H., Abdullah, Mohammed A. M., Tsimenidis, Charalampos C.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Recently, much research has focused on employing deep learning (DL) algorithms to perform channel estimation in the upcoming 6G communication systems. However, these DL algorithms are usually computationally demanding and require a large number of training samples. Hence, this work investigates the feasibility of designing efficient machine learning (ML) algorithms that can effectively estimate and track time-varying, frequency-selective channels. The proposed algorithm is integrated with orthogonal frequency-division multiplexing (OFDM) to eliminate the intersymbol interference (ISI) induced by the frequency-selective multipath channel, and it is compared with the well-known least squares (LS) and linear minimum mean square error (LMMSE) channel estimation algorithms. The obtained results demonstrate that even when a small number of pilot samples, $N_P$, is inserted before the $N$-subcarrier OFDM symbol, the introduced ML-based channel estimation is superior to the LS and LMMSE algorithms. This advantage is reflected in the bit-error-rate (BER) performance of the proposed algorithm, which attains gains of 2.5 dB and 5.5 dB over the LMMSE and LS algorithms, respectively, when $N_P = N/8$. Furthermore, the BER performance of the proposed algorithm is shown to degrade by only 0.2 dB when the maximum Doppler frequency is randomly varied. Finally, the number of iterations required by the proposed algorithm to converge to the smallest achievable mean-squared error (MSE) is thoroughly examined for various signal-to-noise ratio (SNR) levels.
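
For orientation, below is a minimal NumPy sketch of the two baselines the abstract names, LS and LMMSE pilot-based channel estimation over a frequency-selective channel. All concrete choices (comb-type pilot layout, N = 64 subcarriers, a uniform power delay profile, unit-power pilot symbols, the SNR value) are illustrative assumptions, not details taken from the paper; the paper's proposed ML estimator is not reproduced here.

```python
# Sketch of the LS and LMMSE pilot-based baselines the abstract compares
# against. Everything concrete here is an illustrative assumption: comb-type
# pilots, N = 64 subcarriers, N_P = N/8, a uniform power delay profile, and
# unit-power pilot symbols. The paper's proposed ML estimator is not shown.
import numpy as np

rng = np.random.default_rng(0)

N = 64                 # OFDM subcarriers (assumed)
Np = N // 8            # pilot count, matching the abstract's N_P = N/8 ratio
L = 8                  # channel taps (assumed)
snr_db = 20.0          # pilot SNR (assumed)

# Frequency-selective channel: L i.i.d. taps, uniform power delay profile.
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
H = np.fft.fft(h, N)                  # channel frequency response
pilot_idx = np.arange(0, N, N // Np)  # equispaced pilot subcarriers (assumed)
Xp = np.ones(Np, dtype=complex)       # known unit-power pilot symbols

# Received pilots: Yp = H[p] * Xp + noise.
noise_var = 10 ** (-snr_db / 10)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(Np)
                                  + 1j * rng.standard_normal(Np))
Yp = H[pilot_idx] * Xp + noise

# LS estimate: divide out the known pilot symbols.
H_ls = Yp / Xp

# LMMSE estimate: H_lmmse = R (R + sigma^2 I)^-1 H_ls, where R is the
# pilot-domain channel correlation implied by the uniform delay profile:
# R[k, m] = (1/L) * sum_l exp(-j 2 pi l (k - m) / N).
diff = pilot_idx[:, None] - pilot_idx[None, :]
taps = np.arange(L)
R = np.exp(-2j * np.pi * diff[..., None] * taps / N).mean(axis=-1)
H_lmmse = R @ np.linalg.inv(R + noise_var * np.eye(Np)) @ H_ls

for name, est in (("LS", H_ls), ("LMMSE", H_lmmse)):
    mse = np.mean(np.abs(est - H[pilot_idx]) ** 2)
    print(f"{name} pilot-domain MSE: {mse:.4e}")
```

Run as-is, the LMMSE estimate should show a lower pilot-domain MSE than LS, since it exploits the channel correlation and noise statistics; this is the baseline gap that the paper's ML estimator is reported to improve upon further.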
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3097436