Identification and the Information Matrix: How to Get Just Sufficiently Rich?
Saved in:

Published in: IEEE Transactions on Automatic Control, 2009-12, Vol. 54 (12), pp. 2828-2840
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: In prediction error identification, the information matrix plays a central role. Specifically, when the system is in the model set, the covariance matrix of the parameter estimates converges asymptotically, up to a scaling factor, to the inverse of the information matrix. The existence of a finite covariance matrix thus depends on the positive definiteness of the information matrix, and the rate of convergence of the parameter estimate depends on its "size". The information matrix is also the key tool in the solution of optimal experiment design procedures, which have become a focus of recent attention. Introducing a geometric framework, we provide a complete analysis, for arbitrary model structures, of the minimum degree of richness required to guarantee the nonsingularity of the information matrix. We then particularize these results to all commonly used model structures, both in open loop and in closed loop. In a closed-loop setup, our results provide an unexpected and precisely quantifiable trade-off between controller degree and required degree of external excitation.
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2009.2034199
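
The asymptotic result the abstract alludes to can be written out in standard prediction-error notation; the display below is a sketch following common textbook conventions (e.g. Ljung's), not an excerpt from the paper, and the symbols $\hat{\theta}_N$, $\varepsilon$, $\psi$, $\bar{E}$ are the usual parameter estimate, prediction error, error gradient, and generalized expectation:

```latex
% Textbook prediction-error asymptotics (assumed notation, not from the paper):
% \hat{\theta}_N minimizes \frac{1}{N}\sum_t \varepsilon^2(t,\theta) over N data
% points, and \theta_0 is the true parameter (system in the model set).
\sqrt{N}\,\bigl(\hat{\theta}_N - \theta_0\bigr)
  \;\xrightarrow{d}\; \mathcal{N}\bigl(0,\, P_{\theta}\bigr),
\qquad
P_{\theta} \;=\; \sigma_e^{2}\,
  \Bigl(\bar{E}\bigl[\psi(t,\theta_0)\,\psi^{T}(t,\theta_0)\bigr]\Bigr)^{-1},
\qquad
\psi(t,\theta) \;=\; -\,\frac{\partial\,\varepsilon(t,\theta)}{\partial\theta}.
```

With the information matrix defined as $\bar{I}(\theta_0) = \sigma_e^{-2}\,\bar{E}[\psi\,\psi^{T}]$, this gives $P_{\theta} = \bar{I}(\theta_0)^{-1}$ up to the scaling factor the abstract mentions: the covariance is finite precisely when $\bar{I}(\theta_0)$ is positive definite.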
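The richness condition itself can be illustrated on the simplest case. The sketch below is illustrative only, not the paper's method; the FIR model, the test signals, and the numerical tolerance are all assumptions. It builds the sample information matrix of an n-tap FIR model and shows that a single sinusoid, which is persistently exciting of order 2, leaves it singular for n = 4, while a two-frequency multisine renders it nonsingular:

```python
# Minimal sketch (assumptions: FIR model, unit-variance noise, tolerance 1e-8).
# For y(t) = sum_{k=1..n} theta_k u(t-k) + e(t), the information matrix is
# proportional to E[phi(t) phi(t)^T] with regressor phi(t) = [u(t-1),...,u(t-n)]^T;
# it is nonsingular iff u is persistently exciting of order n.
import numpy as np

def sample_information_matrix(u, n):
    """Sample average of phi(t) phi(t)^T over t = n, ..., N-1."""
    N = len(u)
    Phi = np.array([[u[t - k] for k in range(1, n + 1)] for t in range(n, N)])
    return Phi.T @ Phi / (N - n)

N, n = 5000, 4
t = np.arange(N)
inputs = {
    "single sinusoid (order 2)": np.sin(0.5 * t),
    "two-sine multisine (order 4)": np.sin(0.5 * t) + np.sin(1.3 * t),
}
for name, u in inputs.items():
    eig_min = np.linalg.eigvalsh(sample_information_matrix(u, n))[0]
    verdict = "singular" if eig_min < 1e-8 else "nonsingular"
    print(f"{name}: smallest eigenvalue = {eig_min:.2e} -> {verdict}")
```

A sinusoid at a single frequency contributes two directions of excitation (its sine and cosine phases), which is why n/2 distinct frequencies are the textbook minimum for an n-parameter FIR model; the paper generalizes this counting to arbitrary model structures and, in closed loop, to the trade-off with controller degree described in the abstract.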