In 2012, we received an invitation from Oxfam Novib – the NGO that works towards a just world without poverty – to support them in developing a new method for running quality reviews of the work done by country teams in the Global South. In our work together since then, we have enjoyed the excitement of what we feel is a fundamentally new approach to assessing quality. The method has now been used around the world, and in 2013/2014 we had the chance to further improve the concept and sharpen our thinking while evaluating Oxfam's "My Rights, My Voice" programme.
Playing with the rules
Together with the people responsible at Oxfam, we started by setting ourselves a playful challenge: what would an assessment look like if it broke all the rules of auditing that we perceived as flawed?
- Instead of trying to find out where people made mistakes and delivered low quality, could we focus on the secret of their success?
- Instead of looking at documents, policies and procedures, could we look at the work as it was executed in daily life?
- Instead of positioning ourselves as the outsiders who claim some sort of objectivity as experts, could we involve everyone in a process of evaluating themselves, and each other?
- Instead of first gathering all kinds of information and then making sense of it in a report, could we make that an iterative cycle, in which information gathering and sense-making form an integrated, inter-subjective process?
At the end of the day, we wanted to design an assessment after which the people being assessed would want the assessors to come back.
The secret of success
We decided to assume that the people who do the work in the field intend to deliver high-quality work. If we want to help them learn to deliver higher quality, we are probably better off jointly explaining why certain things work well. That is why we looked at the 'secret strength' of the country team or the programme we were auditing. That focus had various effects. Firstly, this respectful focus stimulated energy and openness in the teams whose work was evaluated. Secondly, looking at the 'positive deviation' – the one example where something worked among many other cases where the same idea did not – delivered valuable insights into the mechanisms of the work in the field. Thirdly, because we were building on things people had already done, it was much easier for them to apply the insights in other situations as well.
Looking at real work
We mainly used action-research methods: simply joining people in their daily work, interviewing them in the car between appointments, watching what they were doing, and talking to the people they worked with just after an intervention. In this way, we created a very active, flexible approach that took little time from the professionals being evaluated, since the whole idea was that they would follow their normal schedule.
After some initial scepticism, the teams being evaluated typically came to like taking along someone who looked at their work with an appreciative eye. We also noted that experiencing the interventions and the daily work first-hand – as unrepresentative as the sample may be – delivered very deep insights into the mechanisms of the projects and programmes under evaluation.
A peer-learning process
As much as possible, we involved people from other places doing similar work to evaluate their colleagues, with the idea that learning spreads faster this way than when people in similar environments just read a final report put together by outside consultants. For example, in the evaluation of the "My Rights, My Voice" programme, programme officers from two to four participating countries would join the evaluation team in another country.
We have learned that this approach generates a much wider conversation among everyone involved, and that some very practical ideas are easily translated and adopted by others. On the other hand, we have also seen that the difference in contexts makes it difficult for peers to extract learning: the insights arrive at a level of abstraction beyond the specific context and must be translated back to one's own reality. Nevertheless, some deep insights came from being immersed in a different context for a while. As one of the reviewers noted: "The experience of this week has profoundly changed my idea of poverty and makes me look at the poverty in my own country in a different way."
Iterative sense-making
In each of the reviews, we created short, iterative cycles. For example, we would hold a talk show every evening with everyone involved to discuss what the 'reviewers' had seen that day and what sense they made of it. Others could then challenge the interpretations and think along, and the research team would design what needed to be observed and researched the next day to sharpen the conclusions further.
In this way, we co-created the 'results' of the assessment. We also experimented with posting insights on a protected website every evening, so that a wider audience could think along while the process was under way.
The talk shows created a lot of fun and energy. We also received feedback that this iterative process improved the quality of the conclusions considerably: by the middle of the process, we had already generated the insights that would be considered 'normal' for a quality assessment. At the same time, we have seen that the iterative feedback sessions, in whatever form, are not always easy to run. In addition, finding a format that enables people from very diverse (cultural) backgrounds to talk about research results remains a challenge.
Questions for the future
So did we succeed in designing an assessment process that makes people want us to come back? We think we did. A few months ago, we were affirmed in this thought when we happened to meet a country director who enthusiastically told us about the quality review in her country – the process we first developed in 2012 – and how it had helped her and her team move forward. We are enthusiastic about the potential of these redefined quality assessments. Of course, not all the positive effects described above appear with everyone in every location. At the same time, it has been very rewarding to see how many of those being reviewed completely turned around their attitude towards the process: from being sceptical and reserved to stepping in and inviting others to watch their work; from just letting an assessment pass over them to actively implementing the insights of day two on day three.
While we think we have built a very productive new foundation for quality assessments, some challenges and questions remain that we are curious to work on further:
- How do we deal with the fact that our 'new style' review process often takes place amid a variety of other, much more control-focused audits? The effects of the quality review can feel like a drop in the ocean, and these opposing approaches at times undermine its positive effects on people's attitudes towards quality.
- How can we create a broader, more complete picture of the quality of the programmes? The review process now generates valuable new insights that the regular processes cannot, but these are often not sufficient for a complete picture.
- How can we combine the insights from several of these processes horizontally, to draw broader conclusions?
- How can we make the philosophy and approach of the review more permanent, so that it keeps supporting people in delivering high quality? We now observe that the review is often a one-off exercise whose philosophy is not carried into how the work takes place every day.
- How can we create a daily process of sense-making that involves people of different backgrounds and varying abilities to think abstractly?
- How can we involve other stakeholders, such as management and beneficiaries, more actively in this process?