However, when evaluators are asked off the bat what the best evaluation design is for a specific intervention, the likely response is the ambiguous "it depends". Interventions don't work like neat and tidy laboratory experiments – and one can argue that they shouldn't, given the problems they address. Interventions should be bold, innovative, and conceptualised to perform a specific function. While this is great for social and human development, it can be challenging for evaluators.
In a landscape of interventions of all shapes and sizes, evaluators are tasked with being educator, advocate, technician, and sometimes even magician. At the core of an evaluator's response should be the question "what is the purpose of this evaluation?". More often than not, commissioners of evaluations are interested in outcomes and impact, but various factors (such as the programme design at conceptualisation and the time at which the evaluation is commissioned) can make an RCT unfeasible. It is at this point that evaluators need to do the best they can with what they have. To use the analogy of travelling from point A to point B: the best possible vehicle is not the Rolls-Royce envisioned, but rather a dirt bike.
Some work still needs to be done to shake off the stigma of not producing an evaluation designed to the often elusive "gold standard" – in the hope that the true gold standard of evaluation will come to mean whatever design is fit for purpose.