We walked into an airy boardroom in a leafy Pretoria suburb one morning and decided: “today, we will do it!” The previous week in Cape Town, it hadn’t felt right to drop the D-bomb among a stern group of economists.
This time, it was different. We followed through with the plan – when the discussion started gaining momentum, we stunned them with the question: “Can you remember an occasion when you were delighted by one or more of the stakeholders in this project?”
After a moment of shocked silence and some mutterings of “must we repeat ourselves?” and other not-too-favourable comments, we were surprised to see their faces light up. We could feel the mood in the room lift, get lighter, and gain energy.
Is delight allowed in evaluation? Evaluation is a serious business, and perhaps we should talk about pragmatism and social constructivism? Not yet. But don’t run away. First, let us tell you what happened.
Have you ever seen a “delighted” economist?
We were doing an evaluation on an economic development programme, and we dropped this question into a focus group to experiment with what happens when an appreciative approach is introduced.
Our small test worked better than expected. We were not prepared for the extent to which they were genuinely delighted: one story led to more stories of delight.
As the lively discussion gained momentum, we discovered a hugely significant success story that showed how the project directly influenced a key government department to change the way they work. Their products and services were now aligned to the needs of a major group of clients, which in turn enabled the clients to productively use what the department produces and delivers.
This is actual, tangible impact that is directly attributable to the project – the holy grail of evaluation. Can it get any better than that?
Why was this a surprise?
In the evaluation community, Appreciative Inquiry (AI) is at best not widely accepted, and is sometimes even frowned upon or dismissed as some kind of gimmick. But then, if we keep on doing what we have been doing, we will keep on getting what we have always been getting.
What evaluators have been doing for the past few decades is focusing on the judgment aspect of evaluation. What distinguishes evaluation from other applied social research is that it has to make a judgment on the merit or worth of programmes and projects.
Many different approaches
Evaluators who fall within the Methods Branch and the Positivist Paradigm distance themselves from what they are evaluating, and use scientific methods such as empirical observation, prioritizing experimental designs in the form of randomized controlled trials (RCTs). It is safe to say that anyone who has attempted an evaluation will know that it is not always easy to meet the conditions required for RCTs.
Over time, other perspectives developed on evaluation and the relationship of the evaluator to what is being evaluated. For example, the Use Branch of evaluation and the Pragmatic Paradigm was less focused on the perfect evaluation design, and more on collecting data that would lead to recommendations that could be used.
Then there is the Values Branch and Constructivist Paradigm that wants to identify multiple values and perspectives through qualitative methods. Evaluators in this branch do not buy into the methods branch’s view of the evaluator being an impartial observer. The values branch wants the evaluator to get closer to what is being evaluated, with evaluators being personally involved with targeted communities.
The development of the above perspectives took place alongside the pushback in the social sciences against efforts to imitate the objectivity of the natural sciences – some theorists were not convinced that phenomena in the social world can be studied and measured in the same way as in the natural sciences. Phenomena in the social world are often far more complex.
Planting beans and watching them grow
If you do an experiment with planting beans, like most of us did when we were in primary school, you will observe them every day, measure them, take pictures of them, and you may even talk to them (just don’t tell anybody!). But that is not going to make any difference to how they grow or don’t grow. The growth of the beans will be influenced by the quality of the soil and nutrition they get, and the amount of water, light and warmth available.
As far as we know, staring at beans and talking to them will not make them grow faster. However, we do know that staring at humans and talking to them does affect them. They may not grow faster because you watch them and talk to them, but they will behave differently. This we know from the now famous Hawthorne experiments that were conducted in the 1920s and 1930s in America.
Researchers wanted to measure how variables like lighting, time for breaks, and other factors influenced productivity. They were surprised by what they learnt: “…the workers did not respond to the treatment, but to the additional attention they received from being part of the experiment…” (Neuman, 2003). Although some of the results of the Hawthorne experiments were later challenged, there is still widespread acceptance that the very act of observation, administering questionnaires, or conducting interviews – in other words, the attention paid to people when we are conducting research – does have some influence on the participants and possibly on the phenomenon being studied.
The Hawthorne experiments have always reminded us that we must take care to design our research in such a way that it does not unduly disturb or influence the reality that we are researching. If we plan an observation, we ponder on whether we will observe from a neutral distance, or if we will be a participatory observer. When we design questionnaires, we know that we should not ask leading questions. That is what we have been taught to do, and most of the time it makes sense to follow these rules.
Achieving the change we want to see
Evaluators have, for a long time, lamented the lack of use of their precious and often expensive evaluation reports. When some evaluators started implementing participatory approaches and utility-focused evaluation, the use of those reports also began to change.
Evaluators moved closer to the projects and programmes they were evaluating. In some cases they became involved right from the start, or even better, from concept phase to ensure that monitoring and evaluation is not a last-minute add-on, and in this process they work closely with project staff. With these approaches, evaluation became more relevant and useful, and when evaluation is useful, it has much greater potential to lead to change.
Is change not what evaluation should ultimately achieve? AI was developed as an organizational change methodology, but has been adapted for use in evaluations. It is placed on the “Use Branch” of the Evaluation Theory Tree, but we would like to think that it also resonates with social constructivism.
What strikes us about AI is that although its proponents are mentioned in evaluation literature and scholarly publications, many in the evaluation fraternity still find it somewhat outlandish and far-fetched. But is it? Perhaps the idea that our questions have the power to shape reality is a scary thought, but it is one worth exploring.
[Figure: The Evaluation Theory Tree]
Problem trees grow
The underlying philosophy for AI is that what we focus our attention on in the social world will grow and develop. If we focus on the positive, the positive will grow and multiply, but if we focus on the negative, that will thrive instead. This means that if we follow a problem-centered approach, we get stuck in the misfortune of the problem. The more we try to fix it, the more it grows.
Well, let’s be fair – sometimes problem-solving works, but how many problems did development initiatives (mostly based on problem or deficit analysis) manage to solve over the past 50 or more years? Some progress has definitely been made, and the massive effort put into the human development field deserves acknowledgement. But how do we take the game to the proverbial next level?
What can we lose by trying something different? Consider for one moment that the way we approach our evaluations, frame our questions, and engage with projects may perhaps have the potential to be a change process.
The answer is not rocket science
Does the answer lie in neuro-SCIENCE? Through a remarkable body of research, neuroscience has established that we affect people either positively or negatively by the way in which we engage with them and the way they perceive us (also as evaluators). Prominent neuroscientist Evan Gordon (2000) reminds us that the “avoid danger and maximize reward” principle is an over-arching organizing principle in the brain, and translates into the approach-avoid response.
When our brain tags a stimulus as “good,” we engage with the stimulus (approach), and when our brain tags a stimulus as “bad,” we disengage from it (avoid). Translated into the evaluation space, this means that if our evaluation process is perceived as threatening by stakeholders, they will disengage.
We also know that when people are “seen, heard and loved”, the associated surge in brain chemicals enables them to think better and more creatively (connecting behaviour, or approach). Conversely, when people feel criticized, judged and dismissed, their brains literally shut down as they go into flight mode (avoiding behaviour, or disengagement). Consider for a moment what this means for the evaluator’s interaction with the evaluand.
Why we should be taking some tips from Olympic athletes
There is a wealth of evidence that shows the power of our words. When athletes use positive imaging and words to tap into their potential to perform at their best, we think it is awesome. Why then, do we hesitate to use the same approach to propel our projects and organisations to perform at their best?
We can still use metrics, ongoing progress monitoring and impact assessment, but the question is whether we, like the athletes, can incorporate the positive into our equation. Can we as evaluators find a way of using generative questions to tap into what works, so that we can learn from it and amplify it?
The power of questions is aptly described by Browne (2008), who pointed out that every question has a direction, and because of that direction it carries either generative or destructive energy.
AI is interested in generative questions – those that “build a bridge” or “turn on a light”. The rationale for AI is that if we pose provocative questions that discover the positive core of a project or programme, we can multiply and magnify what works. By doing this tracking and fanning, we focus our energy on what works, and this creates the energy for the programme to grow in that positive direction.
Seven beliefs about human systems
AI is underpinned by a relational and conversational approach to human systems. This approach pays attention to the patterns in the system and the expressive relationship between the elements of the system. Human systems are living systems, and in these systems patterns of belief, communication, action and reaction, sense-making and emotion are important – these are the things that “give life” to the system. Further, living human systems have the possibility to self-renew and grow – that is, the possibility to change.
If we can use AI as a change approach in evaluation, the long-lamented disinterest of stakeholders, non-use of evaluation reports and lack of change as a result of evaluation recommendations could well be as last-millennium as the telegraph.
AI is worth exploring. Does it have the potential to turn the negative aspects highlighted in the Hawthorne experiments into positives? If you are still in doubt, remember that it is also in line with neuroscience, and that is SCIENCE, after all. Also, like it or not, it is there on the evaluation theory tree, which means that it forms part of the body of knowledge about evaluation branches and paradigms.
Our experience with the D-bomb and the economists made us realise that there is something to explore here, and a lot to learn.