Impact evaluations, research, analysis… what is the difference?

22 June 2011

When it comes to policy influence, what is unique about impact evaluations in relation to other types of research? Let me explain why I am asking this question. When I was in RAPID (and still now), I was asked to help organisations develop policy influencing strategies. Sometimes this help came in the form of a workshop; at other times it was provided over a longer period through mentoring and support. Almost every time, the clients would ask for lessons tailored to their own contexts, which ranged from the politics of international donors to those of local NGOs, and from working globally to working regionally or nationally. This often meant that they wanted case studies from their region (e.g. Africa) or the sector they were working in (e.g. health).

Now, RAPID does not tend to advise on HOW to influence but on HOW TO DECIDE how to influence; there is a difference (although the communications team does help with some of the more practical aspects of this). So we have always expected that the context would be provided by the organisation we are working with, and that decisions about which specific influencing approaches to follow would also be theirs. This might sound like a cop-out, but in fact it is an honest approach: we cannot possibly claim to be experts on every context and sector (we work with organisations all over the world that in turn work in a range of sectors). And in any case, we had to assume that those we worked with knew their context well enough; this, we found out, was a fairly naive assumption in some cases.

So, to deal with this demand, we tried to provide support in a way that would allow the client to present, up front, as much contextual and content knowledge as possible. And to do this, we provided some tools (but that is another matter).

Although the planning process proposed is applicable to all sectors and contexts (except that it may not be possible or necessary to follow all the steps, or to be as detailed, in every situation), I accept that influencing in Africa (and in each country) is different from influencing in Latin America, and that influencing health policy is likely to be different from influencing education policy, and so on. But influencing as a research centre is also different from influencing as an NGO; and so on. So focusing on context and content issues may in fact be misleading.

Recently, however, we have been asked to tailor our planning approach (the RAPID Outcome Mapping Approach) and our recommendations on HOW to influence specifically to impact evaluations. Behind this demand is the assumption that policy influencing based on the findings of impact evaluations is different from policy influencing based on the findings of other types of research.

So how different is it to influence on the basis of one type of research rather than another?

My view is that this question is not relevant, and certainly not useful. I will give my reasons below, but let me also ask for your input. If you can demonstrate (or argue, because I am not demonstrating anything) the opposite, please do so; this is an open debate.

To start the debate, let me provide four reasons for my view:

  • Argument not evidence: I have used this point before on this blog, but I think it is still a relatively new idea in the research-policy linkages community. Policy (or a programme or project, or more broadly, behaviour) does not change because of a single piece of evidence. Change happens because new (or improved) arguments are convincing enough to affect someone’s beliefs, assumptions, premises and actions. These arguments are made up of a number of elements, for instance: evidence (from different sources), appeals (to ideology, values, rights, laws, interests, etc.) and imagery (metaphors, stories, etc.). These elements are put together into an argument. And so, even if the findings of impact evaluations are used, they are unlikely to be the only type of evidence, and it is not possible to separate them from the argument as a whole.
  • Credibility is in the eye of the beholder (or ‘any evidence is just evidence’): There is a view that impact evaluations are different from other types of research; that they are the gold standard of evidence. The scientific rigour involved in an impact evaluation, its proponents argue, sets it apart from all other methods. This may be true. Impact evaluations may be more reliable than other methods, but when it comes to influencing this matters if, and only if, the person or people being influenced agree. And if they do, then, if anything, influencing will be easier, and there is therefore even less need to focus on differences or come up with many more specific examples.
  • There are few full-time impact evaluators, and few dedicated impact evaluation centres: While some people and organisations may specialise in impact evaluations, most researchers do a bit of everything. Impact evaluations are just one more type of research they carry out in a normal year. And the same is true of the organisations they work for. As a consequence, they do not communicate impact evaluation findings alone. The idea that they would, or would be able to, specialise in one particular type of influencing (based on the source of the evidence) therefore does not make much sense.

So, not only are impact evaluation findings tangled up with the findings of other types of evidence and the other, non-evidence components of a good argument; they are also, whatever their scientific rigour, not necessarily seen as any different from (or better than) other types of evidence by those being influenced (although some do see them that way). And to top it off, those attempting to influence are not necessarily impact evaluation specialists and therefore cannot possibly develop one set of strategies based only on impact evaluations and another based on ‘other sources of evidence’, to be implemented separately.

The fourth reason is more fundamental:

  • The hundreds if not thousands of cases gathered in the literature have given us a great many lessons (common sense, really) that are relevant to all cases. A lesson does not imply that one should necessarily behave in a particular way, though. For instance, a lesson may be that working with the media can help to open up the debate, but in many cases opening up the debate may not be desirable. This does not negate the lesson; it is simply not applicable in that particular case. The usefulness of impact evaluation specific lessons may lie not in the actions that they suggest but in helping to communicate with impact evaluators and to convince them of the importance of planning for influence. In other words, the lessons from impact evaluation cases may be used as part of an argument aimed at the researchers themselves. But whether they will be more useful than lessons from cases not based on impact evaluations is not really the relevant question.

What do you think?

  • Is there anything about impact evaluation findings that makes influencing strategies (and actions) different from those where impact evaluations have not been used?
  • Is it useful to talk about impact evaluation based influence and non-impact evaluation based influence?
  • Is it worth the effort? Can we not learn from any case?