Floating-point error analysis of recursive least-squares and least-mean-squares adaptive filters
Published in: IEEE Transactions on Circuits and Systems, 1986-12, Vol. 33 (12), p. 1192-1208
Author:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: A floating-point error analysis of the Recursive Least-Squares (RLS) and Least-Mean-Squares (LMS) algorithms is presented. Both the prewindowed growing-memory RLS algorithm (\lambda = 1) for stationary systems and the exponentially windowed RLS algorithm (\lambda < 1) for time-varying systems are studied. For both algorithms, expressions for the mean-square prediction error and for the expected value of the weight error vector norm are derived in terms of the variances of the floating-point noise sources. The results point to a tradeoff in the choice of the forgetting factor \lambda: to reduce the effects of additive noise and of the floating-point noise from the inner-product calculation of the desired signal, \lambda must be chosen close to one; on the other hand, the floating-point noise due to floating-point addition in the weight vector update recursion increases as \lambda \rightarrow 1. Floating-point errors in the calculation of the weight vector correction term, however, do not affect the steady-state error and have only a transient effect. For the prewindowed growing-memory RLS algorithm, exponential divergence may occur due to errors in the floating-point addition of the weight vector update recursion. Conditions for terminating the weight vector update are also presented for stationary systems. The results for the LMS algorithm show that the excess mean-square error due to floating-point arithmetic increases inversely with the loop gain for errors introduced by the summation in the weight vector recursion. The calculation of the desired-signal prediction and of the prediction error leads to an additive noise term, as in the RLS algorithm. Simulations are presented which confirm the theoretical findings of the paper.
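To make the quantities in the abstract concrete, here is a minimal NumPy sketch of the LMS weight vector update recursion the abstract refers to (prediction error, correction term, step size as loop gain). The system, step size, and noise level are illustrative assumptions, not the paper's experimental setup, and the simulation uses ordinary double-precision arithmetic rather than modeling floating-point roundoff explicitly.

```python
import numpy as np

# LMS system identification sketch (illustrative, not the paper's setup):
# adapt weights w to match an unknown FIR system w_true.
rng = np.random.default_rng(0)
n_taps, n_samples = 4, 5000
w_true = rng.standard_normal(n_taps)   # unknown system (assumed)

x = rng.standard_normal(n_samples + n_taps)  # white input
w = np.zeros(n_taps)
mu = 0.01  # loop gain (step size); per the abstract, the floating-point
           # excess MSE from the weight-update summation grows as mu shrinks

errs = []
for k in range(n_samples):
    u = x[k:k + n_taps][::-1]                      # regressor vector
    d = w_true @ u + 1e-3 * rng.standard_normal()  # desired signal + noise
    e = d - w @ u                                  # prediction error
    w = w + mu * e * u                             # weight vector update recursion
    errs.append(e * e)

print(np.linalg.norm(w - w_true))  # weight error vector norm after adaptation
```

After convergence, the weight error vector norm is small and the mean-square prediction error settles near the additive-noise floor; the paper's analysis quantifies how finite-precision arithmetic shifts that floor.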
ISSN: 0098-4094, 1558-1276
DOI: 10.1109/TCS.1986.1085877