Assessing the calibration of mortality benchmarks in critical care: The Hosmer-Lemeshow test revisited

Bibliographic Details
Published in: Critical Care Medicine, 2007-09, Vol. 35 (9), pp. 2052-2056
Authors: Kramer, Andrew A; Zimmerman, Jack E
Format: Article
Language: English
Online Access: Full text
Abstract

OBJECTIVE: To examine the Hosmer-Lemeshow test's sensitivity in evaluating the calibration of models predicting hospital mortality in large critical care populations.
DESIGN: Simulation study.
SETTING: Intensive care unit databases used for predictive modeling.
PATIENTS: Data sets were simulated representing the approximate number of patients used in earlier versions of critical care predictive models (n = 5,000 and 10,000) and more recent predictive models (n = 50,000). Each patient had a hospital mortality probability generated as a function of 23 risk variables.
INTERVENTIONS: None.
MEASUREMENTS AND MAIN RESULTS: Data sets of 5,000, 10,000, and 50,000 patients were replicated 1,000 times. Logistic regression models were evaluated for each simulated data set. This process was initially carried out under conditions of perfect fit (observed mortality = predicted mortality; standardized mortality ratio = 1.000) and repeated with an observed mortality that differed slightly (0.4%) from predicted mortality. Under conditions of perfect fit, the Hosmer-Lemeshow test was not influenced by the number of patients in the data set. In situations where there was a slight deviation from perfect fit, the Hosmer-Lemeshow test was sensitive to sample size. For populations of 5,000 patients, 10% of the Hosmer-Lemeshow tests were significant at p < .05, whereas for 10,000 patients 34% of the Hosmer-Lemeshow tests were significant at p < .05. When the number of patients matched contemporary studies (i.e., 50,000 patients), the Hosmer-Lemeshow test was statistically significant in 100% of the models.
CONCLUSIONS: Caution should be used in interpreting the calibration of predictive models developed using a smaller data set when applied to larger numbers of patients. A significant Hosmer-Lemeshow test does not necessarily mean that a predictive model is not useful or suspect. While decisions concerning a mortality model's suitability should include the Hosmer-Lemeshow test, additional information needs to be taken into consideration. This includes the overall number of patients, the observed and predicted probabilities within each decile, and adjunct measures of model calibration.
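
The abstract's core result, namely that the Hosmer-Lemeshow test's power grows with sample size, is straightforward to reproduce. Below is a minimal Python sketch, not the authors' code: it assumes a Beta(2, 8) distribution of predicted risks (mean mortality near 20%), models the study's 0.4% deviation as an additive 0.004 shift in true mortality, uses 200 replicates instead of the study's 1,000 for speed, and scores the generating probabilities directly rather than refitting a 23-variable logistic regression for each replicate.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def hosmer_lemeshow_p(y, p, groups=10):
    """Hosmer-Lemeshow p-value: chi-square across deciles of predicted risk."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        n_g = len(idx)
        expected = p[idx].sum()    # expected deaths in this decile
        observed = y[idx].sum()    # observed deaths in this decile
        pbar = expected / n_g      # mean predicted risk in the decile
        stat += (observed - expected) ** 2 / (n_g * pbar * (1.0 - pbar))
    return chi2.sf(stat, df=groups - 2)  # conventional df for the HL test

def rejection_rate(n, shift=0.0, reps=200):
    """Fraction of replicates with HL p < .05 when true mortality deviates
    from predicted mortality by an additive `shift` (0.004 here stands in
    for the study's 0.4% difference)."""
    hits = 0
    for _ in range(reps):
        p_pred = rng.beta(2, 8, size=n)  # assumed case mix, mean risk ~0.2
        y = rng.binomial(1, np.clip(p_pred + shift, 0.0, 1.0))
        hits += hosmer_lemeshow_p(y, p_pred) < 0.05
    return hits / reps

for n in (5_000, 10_000, 50_000):
    print(f"n={n:>6}  perfect fit: {rejection_rate(n):.2f}  "
          f"0.4% shift: {rejection_rate(n, shift=0.004):.2f}")
```

Under these assumptions the rejection rate in the shifted scenario should climb steeply from n = 5,000 to n = 50,000 while the perfect-fit rate stays near the nominal 5%, mirroring the 10% / 34% / 100% pattern reported above.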
ISSN: 0090-3493; 1530-0293
DOI: 10.1097/01.CCM.0000275267.64078.B0