on evaluating think tanks (for evaluative thinking)

28 October 2010

This is the million dollar question: if one of the main objectives of think tanks is to influence policy change, how can they prove they have, in fact, done it?

This question is of particular relevance to think tanks in developing countries (and to international development think tanks in developed countries), which are under increasing pressure to show the impact of their work. In other words, they have to evaluate whether the funds they receive from foundations and from the aid budgets of northern donors have been translated into clear policy changes and development outcomes (poverty reduction, better health, greater access to basic services, etc.).

I’ve been working on this issue since mid-2005. In the process I have explored a number of approaches, methods and tools – most of which you can find in Rick Davies’ blog. Another useful source of tools is Ingie Hovland’s paper on monitoring and evaluating the impact of policy research.

In this sea of tools, over and over again, I found that, as Fred Carden said at an ODI event, the problem is one of data – not of methodology. In policy influence interventions we lack the data necessary to study and demonstrate (or test) cause-and-effect relations. So what can we do?

The first challenge, assessing the impact on policies, seems to have been accepted by think tanks without much questioning – even though our capacity to do this is extremely limited. After all, think tanks’ main function is to promote research-based policy change; and many are now investing in theory-of-change-based approaches, as well as other more innovative tools, to monitor uptake and assess their contribution to policy changes.

Reducing poverty, however, is another matter altogether. Few think tanks are ever in contact with the poor except when they are gathering evidence. Some may complain that this is paternalistic or neo-colonial (or just plain lazy) and that the poor are, again, being used by researchers; but think tanks, let’s be honest, do not exist to help the poor directly – and I doubt that they are very good at it. There are others who have adopted that responsibility: the public sector, NGOs, social enterprises, volunteers, etc. I won’t address this issue in this post.

Assessing the influence on policy: as futile as defining think tanks?

How easy is it to assess policy influence? Think tanks operate in that murky space where decisions are made, and its exact location changes from context to context. They attempt to influence the circumstances under which decisions are made, how they are made, who makes them, and their outcomes. To do this they play, whether they want to accept it or not, the game of politics. This is sometimes clearer in developed countries: IPPR has spent the last year realigning itself from New Labour towards a more centrist position in preparation for this year’s election outcome. The Heritage Foundation has gone from legitimising George Bush’s policies to de-legitimising Barack Obama’s.

However, unlike other political actors, think tanks have goals that are difficult to measure: political parties and politicians can count their votes, the policies they pass or implement, and perhaps even the resources they are able to mobilise. The media can count their readers and their advertising income. Lobbies can point to specific articles in amended legislation, or to the approval or rejection of specific policies, to measure their impact – and calculate exactly how much money this translates into in their clients’ bank accounts.

Furthermore, unlike these other actors, think tanks don’t have a single business model that is followed by all: political parties run campaigns to get elected, the media publishes newspapers or broadcasts, and lobbies lobby.

Think tanks do a bit of everything to pursue their core functions: research, publications, campaigns, pilots, media, advice, lobbying, negotiation, partnerships, etc. But not in the same combination, nor in the same way.

And what can they measure? The number of times a document is downloaded is, on its own, irrelevant to the question of substantive influence. They could, of course, find out who is downloading the document and try to determine how they use it; but few think tanks are alone in their policy space, and the cases where it is possible to trace a clear link between research and policy are rare – and often, in my opinion, made up. In any case, the users of think tanks’ research have the power to decide whether they want to be influenced or not – so really, in the end, it is they, and they alone, who influence policy.

Even if we could find out, in detail, how a particular think tank was able to contribute to influencing a particular policy (or not), or if we could detail how a policy maker made up his or her mind, what would this mean? In research terms it is just an anecdote or, if done properly, a good case study. Pile them up and the most we’ll get is a collection of case studies describing things that worked and things that did not, but nothing that tells us, beyond doubt, what will work in the future. Nor will it tell us (and this is what policy research donors want to know) whether supporting that particular think tank was good value for money.

The returns on investment in a think tank are quite hard to measure, both at the project and at the organisational level

To begin with, the value of a research project depends on a great many factors, including:

- the level of effective demand for the particular knowledge produced;
- the availability of other sources that deal with the same issues (competition);
- whether it fits well with, or adds value to, other initiatives or knowledge;
- the value that the audience (or client) attaches to knowledge in their work;
- the value that other policy actors (like the media) attach to knowledge;
- the reputation or goodwill of the think tank (or the researchers);
- the opportunity cost for the think tank of that particular project;
- the opportunity costs for the users (of accessing it, of using other sources, or of not using research at all);
- the importance of the issue that the project addresses; etc.

Even if we could measure all this, we would still have to consider the impact of the project’s activities and outputs on policy. Here we would have to calculate the think tank’s contribution to a particular policy change – is it possible to attribute 10%, 20% or 75% to one particular actor? Is it even possible to determine when one policy process starts and another ends? Then we would have to estimate how much of the think tank’s influence is due to that particular project; it is perfectly possible that the think tank could have influenced the policy without the project, because it already had a fantastic communications team, because its networks reach right to the top of the country’s policy elite, or because it has built, over the years, a great reputation – and the project was a marginal addition to its portfolio. The project’s share of the think tank’s overall budget or effort may be a reasonable indicator of its contribution. But how do we measure the role of the think tank’s reputation?
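To see just how fragile such an attribution exercise is, consider a deliberately naive sketch (in Python, with every figure invented for illustration): the final ‘contribution’ number looks precise, but it is manufactured entirely by the weights we assume.

```python
# A deliberately naive, hypothetical attribution exercise.
# Every number below is an assumption, not a measurement.

# Assumed share of the policy change attributable to the think tank at all
think_tank_contribution = 0.30  # could just as easily be 0.10 or 0.75

# Assumed split of the think tank's influence across its assets
weights = {
    "this_project": 0.25,  # the project's share of the think tank's effort
    "reputation":   0.40,  # built over years -- how would we measure this?
    "networks":     0.35,  # contacts reaching the policy elite
}

project_attribution = think_tank_contribution * weights["this_project"]
print(f"The project 'caused' {project_attribution:.1%} of the policy change")
# Prints 7.5% -- a precise-looking figure resting entirely on invented weights.
```

Change any of the assumed weights and the answer moves with it; the precision is cosmetic.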

But how useful would this be? It may tell us that for a particular initiative the think tank spent little money but had a big impact. It is also possible that the think tank spent a lot and had a big impact; or that it spent little and had a small impact; or a lot of money and no impact at all. But this means nothing unless we qualify these findings.

For example, we might find that a think tank focusing on civil service reform in developing countries spent a lot of money (over many years) and had little or no impact. But is this enough to judge the value of the investment? Was it not an important initiative? Could we argue that its objectives were far more difficult than, say, attempting to eliminate user fees in countries where the governments were already against them?

To be really useful, these assessments of influence would require think tanks to think critically about such cases: first they would have to be contextualised, and only then could lessons be drawn from them in relation to the think tanks’ contexts, objectives, strategies and structures.

This goes beyond the case study approach that has dominated most efforts to draw lessons for think tanks from individual research-into-policy cases – efforts which predictably end up with the same old recommendations: carry out high-quality research, develop networks and alliances, invest in communications, and understand and work as closely as possible with the politics of policies. To match these recommendations there is an ever-increasing menu of tools and tactical choices – which can be found by browsing the Evidence based Policy in Development Network, for instance.

Larger numbers

To make sure that the lessons coming out of these evaluations and assessments are useful for the wider community of think tanks and their supporters, we would have to somehow aggregate all the cases and calculations into a sample large enough to offer at least minimal statistical significance. Only then could we assert things like: ‘university-based think tanks tend to spend more than others on research capacity but also focus on more fundamental policy changes, while NGO-based think tanks spend less on in-house academic research but target changes in the implementation of policies’; or ‘successful economic policy research centres require an investment in communications of about 40% of their research budget’, while ‘successful health policy research centres have a communications budget closer to 25%’.
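If such a sample ever existed, the arithmetic itself would be the easy part. Here is a minimal sketch, with entirely invented records and a sample far too small to mean anything, of the kind of aggregation that would back claims like those above:

```python
# Hypothetical sketch: aggregating (invented) records to back claims like
# 'successful economic policy centres spend ~40% of their research budget
# on communications'. A real sample would need to be far larger, and far
# more comparable, to carry any statistical weight.

from statistics import mean

sample = [
    # (field, comms spend as share of research budget, judged successful?)
    ("economic", 0.42, True),
    ("economic", 0.38, True),
    ("economic", 0.15, False),
    ("health",   0.26, True),
    ("health",   0.24, True),
    ("health",   0.45, False),
]

for field in ("economic", "health"):
    shares = [share for f, share, ok in sample if f == field and ok]
    print(f"{field}: successful centres spent {mean(shares):.0%} of their "
          f"research budget on communications (n={len(shares)})")
```

The analysis is trivial; assembling comparable data from enough think tanks is not.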

Now, this would be useful, and it would help answer the big strategic questions that think tanks face: How much should we spend on research as opposed to communications? How long should we plan for a particular type of objective? What is the likelihood of success of one strategy or another? How does our business model affect our strategy?

But this is not something a single think tank could do; certainly not one already strapped for cash and focused on a rather limited policy space. Nor does it answer donors’ value-for-money question.

Evaluative thinking approach

So, until someone deals with this, what should think tanks do about the evaluation of their influence?

In March 2010, and then again in September 2010, the Think Tank Initiative held a Learning Event on monitoring and evaluation for its African and Latin American grantees. The main objective of the events was to embed evaluative thinking in the way think tanks work. Evaluative thinking is, in fact, critical thinking: making reasoned judgements and, most importantly, having the ability to think about how and why we do what we do.

An evaluative thinking approach, then, is not just a mechanism for using new M&E tools or indicators. Think tanks are thinking evaluatively when they explore the reasons why they make certain strategic choices or undertake a particular activity or tactic; or question why some aspects of their strategy work better than others; or seek lessons from their own experience and from others’ to better understand their work and be able to adapt quickly to changing and uncertain contexts.

Therefore, when it comes to evaluating think tanks, what matters is whether the evaluation findings are necessary, of the right quality and, most importantly, whether they help think tanks to think critically and strategically. Everything else, I consider, is an unnecessary distraction. (In other words, the million dollar question is not really worth that much.)

In my opinion, then, the Think Tank Initiative’s response to the demand for monitoring and evaluation support was the right one (and I think it is safe to assume that it has benefited from the influence of IDRC’s own Evaluation Unit). But it isn’t enough. Critical thinking now needs to be embedded in the organisations’ planning, monitoring and learning systems and processes; and this will require, in some cases, significant (and very serious) investments in think tanks’ human capital.