Asymptotic Behavior of Recursive State Estimations With Intermittent Measurements

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2016-02, Vol. 61 (2), pp. 400-415
Author: Zhou, Tong
Format: Article
Language: English
Description
Abstract: Convergence rate and stationary-distribution approximation are investigated in this paper for both the covariance matrix of the Kalman filter and the pseudo-covariance matrix of a robust recursive state estimator developed by us. When the measurement dropping process is described by a Markov chain and the associated plant is both controllable and observable, it is proved that if the dropping probability is less than 1, these two matrices converge exponentially to a stationary distribution that is independent of their initial values. Moreover, when they are initialized with the stabilizing solution of an associated algebraic Riccati equation, both are shown to converge to an ergodic process. Based on these results, two approximations are derived for their stationary probability distributions, together with a bound on the approximation error. It is made clear that a series of delta functions gives a more accurate approximation than the single delta function suggested by Kar and Moura, and that replacing these estimators' update gains with a constant matrix usually degrades their steady-state estimation accuracy. A numerical example is provided to illustrate the theoretical results.
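
The abstract refers to the random Riccati recursion that the Kalman filter's error covariance follows when measurements arrive or are dropped according to a Markov chain. The following is a minimal Python sketch of such a recursion, using illustrative plant matrices and Markov transition probabilities that are not taken from the paper, to show how trajectories started from very different initial covariances forget their initial values after a short transient, as the abstract describes.

```python
import numpy as np

# Hedged sketch: covariance recursion of a Kalman filter whose measurements are
# dropped according to a two-state Markov chain. All numerical values below are
# illustrative assumptions, not parameters from the paper.

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # state transition (observable pair with C)
C = np.array([[1.0, 0.0]])          # measurement matrix
Q = 0.05 * np.eye(2)                # process-noise covariance (full rank)
R = np.array([[0.1]])               # measurement-noise covariance

# Markov chain on {0 = measurement dropped, 1 = measurement received};
# the stationary dropping probability is well below 1.
T = np.array([[0.3, 0.7],           # transition probs from "dropped"
              [0.2, 0.8]])          # transition probs from "received"

def riccati_step(P, received):
    """One step of the covariance recursion with an intermittent measurement."""
    P_pred = A @ P @ A.T + Q
    if received:
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        return P_pred - K @ C @ P_pred
    return P_pred

def simulate(P0, steps=200):
    gamma = 1
    P = P0.copy()
    traces = []
    for _ in range(steps):
        gamma = rng.choice(2, p=T[gamma])        # Markov measurement arrival
        P = riccati_step(P, gamma == 1)
        traces.append(np.trace(P))
    return np.array(traces)

# Two very different initializations: the trajectories of tr(P_k) become
# statistically indistinguishable after a transient, illustrating the
# exponential forgetting of initial conditions described in the abstract.
tr_small = simulate(0.01 * np.eye(2))
tr_large = simulate(100.0 * np.eye(2))
print("mean tr(P) over last 50 steps:", tr_small[-50:].mean(), tr_large[-50:].mean())
```
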
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2015.2434071