Three lessons on evaluation: why thinking about usefulness, communication and learning matters

16 September 2021

Ajoy Datta's OTT article on lessons for devising, revising and evaluating strategies for funding organisations inspired me to pull together three critical lessons of my own in this area:

  1. Scoping what evidence is needed, for whom and by when;
  2. The forgotten ‘L’ in MEL; and
  3. It ain’t what you say, it’s the way that you say it – and that’s what gets results!

1. Scoping what evidence is needed, for whom and by when

A tight boundary or scope is vital, and the upfront work to get this right is the foundation of any good evaluation. The value of feeding this into the terms of reference for evaluation commissions, and of testing it critically with evaluators during the inception phase, cannot be overstated.

What evidence?

Too often, the definition of what evidence is needed – in the form of evaluation questions – is either too vague or resembles a loooong, unprioritised shopping list. Consultation and challenge are required to boil evidence requests down to a short list of key evaluation questions based on what is really required (and affordable).

Linked to this is the need to conduct evidence mapping as a critical first step. Often this is just as useful as – or in some cases more useful than – the generation of new evidence. The objective should be to ensure that, as far as possible, evaluations generate genuinely new evidence and avoid duplicating what already exists.

Caution is required to ensure that evaluators don't simply pick the low-hanging fruit, which often adds little additional value, while shying away from the difficult but critical evidence gaps that, if addressed, would significantly aid learning and performance.

For whom?

Different stakeholders have very different wants. These should be teased out before the work starts and made clear to the evaluator; otherwise, uptake of the evaluation can be poor or limited.

Some stakeholders are interested in delivery issues, some in a specific thematic issue or geographical region, while others are more interested in strategic outcomes. Understanding what each stakeholder wants is critical, including how those wants shift as stakeholders' views change over time.

There is often too much focus on the perceived 'most important' stakeholder, which is usually the funder. Neglecting the needs of other key stakeholders greatly limits the potential impact of the evaluation.

By when?

The timing of evidence delivery can make all the difference between a report that sits on a shelf and one that influences major design, funding and policy decisions.

It is remarkable how often evaluations are produced in a vacuum, disconnected from the decisions they are meant to influence.

I would even go so far as to say that, on many occasions, it is better to deliver a more limited evidence product that hits a key decision point than to prioritise a really robust evidence product that feeds into the void.

In certain instances, evaluation timetables may need to be adapted (yes, we also need to be adaptive!) if the dates of decision points change.

2. The forgotten ‘L’ in MEL

While monitoring and evaluation usually have a clear agenda in our work, learning is frequently neglected.

Learning routinely lacks a clear approach, specific roles and responsibilities, and sufficient capacity (in terms of people, funding and time).

For learning to really gain traction, it needs to be owned within the implementing organisation. However, this requires a recognition that qualified staff need to be in place – especially senior staff who can sit on and challenge management boards. It also requires organisations to carve out time to discuss learning (including learning linked to evaluative products) and to make course corrections, which is not easy for very busy organisations.

There is currently a lot of talk about adaptive management, which sounds good in theory but is very time-consuming in practice; it demands considerable commitment and capacity. Closing the short-term loops on learning and adapting is tough work, and we need to be honest about how much appetite there is to do it.

3. It ain’t what you say, it’s the way that you say it – and that’s what gets results!

We all need to become more obsessed with how we communicate evidence, because this has a major impact on whether the evidence is used or not. The production of telephone-directory-sized reports and impenetrable academic text should drive us all bonkers.

There is a need to think through from the start (not at the very end) the suite of communication products to be developed for each stakeholder, and to allocate adequate budget and expertise. How we communicate with key stakeholders during an evaluation also matters – for example, more in-person sense-making workshops and fewer long written documents to review.

Products should be developed in various forms – be it videos, visualisations or short briefs tailored for each key stakeholder. Creativity is needed in how complex information is summarised – for example, databases where users can filter information by areas of interest.

Clarity and brevity are especially important for some stakeholders, such as senior policymakers, who may have a mere 10–20 minutes to digest the information.

Finally, every report should plainly state whether there is enough evidence to make a judgement, and the extent to which each evaluation question can be answered positively or otherwise (using approaches such as traffic-light ratings). If there isn't enough evidence to make a judgement, or if the judgement is mixed or only holds under certain circumstances, that is also useful and shouldn't be hidden by evaluators.

It would be really interesting to hear from others on what light bulb moments they have had. Closing the learning loop for practitioners on the ‘doing’ is just as important as the learning loop for the areas we evaluate.