Vector Decomposed Long Short-Term Memory Model for Behavioral Modeling and Digital Predistortion for Wideband RF Power Amplifiers

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 63780-63789
Main Authors: Li, Hongmin; Zhang, Yikang; Li, Gang; Liu, Falin
Format: Article
Language: English
Online Access: Full text
Description
Abstract: This paper proposes two novel vector decomposed neural network models for behavioral modeling and digital predistortion (DPD) of radio-frequency (RF) power amplifiers (PAs): the vector decomposed long short-term memory (VDLSTM) model and the simplified vector decomposed long short-term memory (SVDLSTM) model. The proposed VDLSTM model is a variant of the classic long short-term memory (LSTM) model that can capture long-term memory effects. To comply with the physical mechanism of RF PAs, the VDLSTM model performs nonlinear operations only on the magnitudes of the input signals, while the phase information is recovered by linear weighting operations on the output of the LSTM cell. Furthermore, this study modifies the LSTM cell by adding phase-recovery operations inside the cell and replacing the original hidden state with the output magnitudes recovered with phase information. With the modified LSTM cell, a low-complexity SVDLSTM model is proposed. The experimental results show that the proposed VDLSTM model achieves better linearization performance than state-of-the-art models when linearizing PAs with wideband inputs. Moreover, in wideband scenarios, the SVDLSTM model delivers comparable linearization performance to the VDLSTM model with far fewer parameters.
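
The abstract's core idea (nonlinear memory modeling on the signal magnitude only, with the phase reintroduced by a linear weighting of the LSTM output) can be sketched in a few lines of PyTorch. The sketch below is purely illustrative and is not the authors' implementation: the class name VDLSTMSketch, the hidden size, and the specific complex-valued phase-recovery weighting are assumptions made for demonstration.

import torch
import torch.nn as nn

class VDLSTMSketch(nn.Module):
    """Rough sketch of a vector-decomposed LSTM behavioral model.

    The LSTM sees only the magnitude of the complex baseband input; the
    phase is recovered afterwards by linearly weighting the LSTM output
    with the cos/sin components of the input phase (hypothetical form).
    """

    def __init__(self, hidden_size: int = 16):
        super().__init__()
        # Nonlinear part: operates on the input magnitude only.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        # Linear phase-recovery weights (real and imaginary parts of a complex weight vector).
        self.out_i = nn.Linear(hidden_size, 1, bias=False)
        self.out_q = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, x_i, x_q):
        # x_i, x_q: (batch, time) in-phase and quadrature baseband samples.
        mag = torch.sqrt(x_i ** 2 + x_q ** 2)        # |x(n)|
        cos_p = x_i / (mag + 1e-12)                  # cos(theta(n))
        sin_p = x_q / (mag + 1e-12)                  # sin(theta(n))
        h, _ = self.lstm(mag.unsqueeze(-1))          # nonlinear memory on magnitudes only
        # Recover the phase by a linear (complex-style) weighting of the
        # phase-modulated LSTM output.
        y_i = self.out_i(h * cos_p.unsqueeze(-1)) - self.out_q(h * sin_p.unsqueeze(-1))
        y_q = self.out_i(h * sin_p.unsqueeze(-1)) + self.out_q(h * cos_p.unsqueeze(-1))
        return y_i.squeeze(-1), y_q.squeeze(-1)

# Example: run a batch of 4 sequences of 128 complex baseband samples.
model = VDLSTMSketch(hidden_size=16)
x_i, x_q = torch.randn(4, 128), torch.randn(4, 128)
y_i, y_q = model(x_i, x_q)
print(y_i.shape, y_q.shape)  # torch.Size([4, 128]) torch.Size([4, 128])

In an actual DPD workflow such a model would be trained on measured PA input/output data (e.g., by indirect learning) before being placed in front of the PA; those steps are outside the scope of this sketch.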
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2984682