Performance indicators: good, bad, and ugly

Bibliographic details
Published in: Journal of the Royal Statistical Society: Series A (Statistics in Society), 2005-01, Vol. 168 (1), p. 1-27
Main authors: Bird, Sheila M.; Cox, Sir David; Farewell, Vern T.; Goldstein, Harvey; Holt, Tim; Smith, Peter C.
Format: Article
Language: English
Description
Summary: A striking feature of UK public services in the 1990s was the rise of performance monitoring (PM), which records, analyses and publishes data in order to give the public a better idea of how Government policies change the public services and to improve their effectiveness. PM done well is broadly productive for those concerned. Done badly, it can be very costly and not merely ineffective but harmful and indeed destructive. Performance indicators (PIs) for the public services have typically been designed to assess the impact of Government policies on those services, or to identify well performing or under-performing institutions and public servants. PIs' third role, which is the public accountability of Ministers for their stewardship of the public services, deserves equal recognition. Hence, Government is both monitoring the public services and being monitored by PIs. Especially because of the Government's dual role, PM must be done with integrity and shielded from undue political influence, in the way that National Statistics are shielded. It is in everyone's interest that Ministers, Parliament, the professions, practitioners and the wider public can have confidence in the PM process, and find the conclusions from it convincing. Before introducing PM in any public service, a PM protocol should be written. This is an orderly record not only of decisions made but also of the reasoning or calculations that led to those decisions. A PM protocol should cover objectives, design considerations and the definition of PIs, sampling versus complete enumeration, the information to be collected about context, the likely perverse behaviours or side-effects that might be induced as a reaction to the monitoring process, and also the practicalities of implementation. Procedures for data collection, analysis, presentation of uncertainty and adjustment for context, together with dissemination rules, should be explicitly defined and reflect good statistical practice. Because of their usually tentative nature, PIs should be seen as 'screening devices' and not overinterpreted. If quantitative performance targets are to be set, they need to have a sound basis, take account of prior (and emerging) knowledge about key sources of variation, and be integral to the PM design. Aspirational targets have a distinctive role, but one which is largely irrelevant in the design of a PM procedure; motivational targets which are not rationally based may demoralize and distort. Anticipated and
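The abstract's recommendation that PIs be treated as 'screening devices', presented with their uncertainty, can be made concrete with a small sketch. The sketch below is not taken from the paper: the institutions and figures are invented, the indicator is a simple binomial event rate, and the limits are funnel-plot-style 3-sigma bands around the overall rate, so an institution is flagged for further scrutiny only when its rate falls outside what chance variation at its caseload would explain, rather than being ranked in a league table.

```python
# A minimal sketch (illustrative only) of a performance indicator used as a
# screening device: each institution's event rate is compared with control
# limits around the overall rate, and only units outside the limits are
# flagged for further scrutiny. Institution names and counts are invented.
from math import sqrt

# (institution, events, cases) -- hypothetical data
data = [("A", 18, 250), ("B", 35, 400), ("C", 9, 300), ("D", 60, 520)]

total_events = sum(e for _, e, _ in data)
total_cases = sum(n for _, _, n in data)
overall_rate = total_events / total_cases

Z = 3.0  # roughly 99.8% limits; a deliberately conservative screening threshold

for name, events, cases in data:
    rate = events / cases
    # Normal approximation to binomial variation at this institution's caseload
    half_width = Z * sqrt(overall_rate * (1 - overall_rate) / cases)
    lower, upper = max(overall_rate - half_width, 0.0), overall_rate + half_width
    status = "flag for scrutiny" if (rate < lower or rate > upper) else "consistent with overall rate"
    print(f"{name}: rate={rate:.3f}, limits=({lower:.3f}, {upper:.3f}) -> {status}")
```

Smaller institutions receive wider limits, which is one simple way adjustment for context (here, caseload) enters; a real PM protocol would fix the model, the limits and the follow-up actions in advance rather than after the data are seen.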
ISSN: 0964-1998, 1467-985X
DOI: 10.1111/j.1467-985X.2004.00333.x