Evaluating participatory decision processes: Which methods inform reflective practice?
Published in: Evaluation and Program Planning, 2014-02, Vol. 42, p. 11-20
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary:
• We focus on evaluation approaches for participatory decision processes.
• We investigate how evaluations can contribute to improving intervention practices.
• We compare case findings from a national survey to results of in-depth interviews.
• Surveys failed to surface the role of context information while the interviews did.
• Interview insights inform practice more than surveys but do not generalize readily.
Evaluating participatory decision processes serves two key purposes: validating the usefulness of specific interventions for stakeholders, interveners, and funders of conflict management processes, and improving practice. However, evaluation design remains challenging, partly because when we attempt to serve both purposes we may end up serving neither well. In fact, the better we respond to one, the less we may satisfy the other. Evaluations tend to focus on endogenous factors (e.g., stakeholder selection, BATNAs, mutually beneficial tradeoffs, quality of the intervention) because we believe that the success of participatory decision processes hinges on them, and because these factors seem to lend themselves to ceteris paribus statistical comparisons across cases. We argue that context matters too, and that contextual differences among specific cases are meaningful enough to undermine conclusions derived solely from comparisons of process-endogenous factors implicitly rooted in the ceteris paribus assumption. We illustrate this argument with an environmental mediation case. We compare data collected about it through surveys geared toward comparability across cases to information elicited through in-depth interviews geared toward case specifics. The surveys, designed by the U.S. Institute for Environmental Conflict Resolution, feed a database of environmental conflicts that can help make the (statistical) case for intervention in environmental conflict management. Our interviews elicit case details, including context, that enable interveners to link context specifics and intervention actions to outcomes. We argue that neither approach can “serve both masters.”
ISSN: 0149-7189, 1873-7870
DOI: 10.1016/j.evalprogplan.2013.08.002