MSE Bounds With Affine Bias Dominating the Cramér-Rao Bound


Bibliographic Details
Published in: IEEE Transactions on Signal Processing, 2008-08, Vol. 56 (8), p. 3824-3836
Author: Eldar, Y.C.
Format: Article
Language: English
Description
Abstract: In continuation of an earlier work, we further develop bounds on the mean-squared error (MSE) when estimating a deterministic parameter vector θ_0 in a given estimation problem, as well as estimators that achieve the optimal performance. Traditional Cramér-Rao (CR) type bounds provide benchmarks on the variance of any estimator of θ_0 under suitable regularity conditions, while requiring a priori specification of a desired bias gradient. To circumvent the need to choose the bias, which is impractical in many applications, our earlier work suggested treating the MSE directly, which is the sum of the variance and the squared norm of the bias. While previously we developed MSE bounds assuming a linear bias vector, here we study, in the same spirit, affine bias vectors. We demonstrate through several examples that allowing for an affine transformation can often improve the performance significantly over a linear approach. Using convex optimization tools, we show that in many cases we can choose an affine bias that results in an MSE bound smaller than the unbiased CR bound for all values of θ_0. Furthermore, we explicitly construct estimators that achieve these bounds in cases where an efficient estimator exists, by performing an affine transformation of the standard maximum likelihood (ML) estimator. This leads to estimators with a smaller MSE than ML for all possible values of θ_0.
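The abstract's central objects can be illustrated numerically. The sketch below is not the paper's construction; it merely demonstrates, in a hypothetical scalar Gaussian model (x_i ~ N(θ, σ²), where the ML estimator is the sample mean and attains the CR bound σ²/n), how an affine transformation a·x̄ + b of the ML estimator has MSE equal to its variance plus squared bias, and how it can beat the ML estimator at a particular θ. The values of a, b, θ, σ, and n are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (an assumption, not the paper's exact model):
# n i.i.d. samples x_i ~ N(theta, sigma^2); ML estimate is the sample mean.
sigma, n, trials = 1.0, 10, 200_000
theta = 0.5

# Affine transformation of the ML estimator: a * mean + b (values chosen
# purely for illustration).
a, b = 0.8, 0.05

x = rng.normal(theta, sigma, size=(trials, n))
ml = x.mean(axis=1)          # ML estimator, unbiased, variance sigma^2 / n
aff = a * ml + b             # affine-transformed estimator

# Empirical MSEs via Monte Carlo.
mse_ml = np.mean((ml - theta) ** 2)
mse_aff = np.mean((aff - theta) ** 2)

# Analytic decomposition for the affine estimator:
# MSE = variance + (squared norm of) bias.
var_aff = a ** 2 * sigma ** 2 / n
bias_aff = (a - 1) * theta + b

print(f"ML MSE      : {mse_ml:.4f}  (CR bound: {sigma**2 / n:.4f})")
print(f"Affine MSE  : {mse_aff:.4f}  (variance + bias^2: "
      f"{var_aff + bias_aff**2:.4f})")
```

Note that with a fixed scalar a and b the improvement over ML holds only for θ near enough to the shrinkage target; the paper's result, that a suitably chosen affine bias yields an MSE below the unbiased CR bound for *all* θ_0, relies on the vector setting and the convex-optimization machinery developed there.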
ISSN: 1053-587X, 1941-0476
DOI:10.1109/TSP.2008.925584