Ray Struyk has worked on think tanks for quite some time and has great insights into what makes them work. But I wonder if these insights can (or should) be quantified. Following RAPID’s work on the factors affecting the uptake of research (context, evidence and links), he and Sammuel Haddaway attempt to find statistically significant relationships between these factors and policy research organisations’ (PROs) success. This is not an easy job, so the first thing I must do is congratulate them for the attempt.
In general, the paper finds what RAPID’s work found all those years ago: context matters, interpersonal relations matter (if they know you, you are more likely to be influential), and so on. For the authors, though, the main conclusion is that it is possible to:
establish significant statistical relationships between measures of PRO performance, attributes, and conditions in the policy markets where they operate.
The key phrase here is IN THE MARKETS WHERE THEY OPERATE. Here are some comments I would make on the paper and its findings:
In the study, PROs include think tanks (but also advocacy NGOs, research NGOs, and other types of civil society organisations). I am not sure which ones they are, but I would have focused on one type of organisation. Even within a single context, success, and how to achieve it, cannot be assumed to be the same for all: these organisations play very different roles.
The CEL framework (context, evidence and links) also includes external influences. This can be a catch-all category, yes, but I think that in this case it would have been useful to include. The context category is much more immediate to the organisation: it describes the context that affects it directly in what it is trying to do. In fact, the framework is really relevant for specific policy-influencing interventions rather than for PROs’ performance in general. In any case, the external environment includes factors such as NGO legislation, labour and tax laws, donor behaviours (donors are absent from the analysis; in fact, they are only mentioned once) and cultural norms, which are extremely important in explaining how organisations communicate and engage with others, especially how researchers engage with policymakers.
I am also not sure how significant the global sample can be. Policy is a local matter, so the sample ought to be large enough for each country (and even then we may find some problems). A sample of a few organisations from countries as different as India, Argentina and Uganda cannot really tell us anything that is not as general (and catch-all) as the RAPID CEL framework that underpins this study. And this is a critique of RAPID too: working at the global level, we could only come up with very broad and general conclusions; common-sense conclusions.
As the authors themselves recognise, the sample used is questionable. The PROs chose the people to be surveyed, and as a consequence the questions used to construct the dependent variables seem doomed to fail: the first three are obviously going to be answered positively (the respondents are likely to be friends of the organisation) and the last two are obviously going to be answered negatively (budget influence and accountability are not things that can be attributed to a single PRO). And there is also the matter of the choice of ‘success’: direct influence. As I have said before, success for a think tank is not just direct influence.
On the independent variables I have more positive things to say: a clear mission is critical and often overlooked; focus is equally important, and organisations that specialise in a few issues or methods are more likely to gain a reputation than those that attempt to cover a bit of everything; professional communication teams can help to improve the image of the organisation, though they may also risk its position of neutrality or independence (depending on the political context); and research quality is critical.
However, these indicators place too much emphasis, in my view, on communication questions. This assumes, I must infer, that success is equated with communication outcomes. Is influence as a consequence of long-term, credible and robust research not possible? And the indicators that focus on the environment are not really the type of indicators that ought to help explain the formation and development of think tanks (or PROs). Work by think tank scholars tends to emphasise the importance of factors such as:
- The institutionalisation and porosity of political parties, including the existence of party think tanks
- The porosity of the civil service and the government
- The degree of competition in the policymaking space (how many public and private agents are involved)
- The strength of tertiary education, both as a competitor and as a supplier of ideas and staff
- The value of knowledge in society
- The existence of pressure or interest groups
- The legal frameworks governing think tanks (PROs): NGO legislation, tax legislation, labour legislation, etc.
According to their (and RAPID’s) list of variables (which mostly amount to ‘freedom of’ indicators), China should not have any think tanks, or its think tanks should not be very good at influencing policy. This is certainly not true.
An independent variable that does not receive enough attention is funding. It does not appear to be of great importance in determining success in this study, but it is consistently the number one challenge that think tanks say they face. It is not just having funds that matters; what kind of funds, and for what, is critically important. If, as Andrew Rich says, substantive influence depends on being able to influence the definition of the problem, then organisations need long-term and flexible funding (core funding) to make the necessary investments and strategic choices. Project funding (attached to lengthy contracts) won’t do.
But all is not lost. I think that any attempt to assess the influence of these types of organisations will have to consider ways of working with large datasets. I have suggested that many projects be put together to construct these, so that it may be possible to have hundreds or thousands of instances of influencing attempts: by different means, in different sectors and contexts, and by different organisations. But the keyword here is large. The samples need to be large enough to be of significant relevance in each of these cases.
However, I am not suggesting that this ought to be used to measure success, but rather to draw more nuanced lessons that will be useful, most of all, for the organisations themselves.
Replicating this for India or for Argentina (where there are quite a few PROs) may be a much better option than attempting a few across the world. So the India and Uganda samples may be more interesting, but without enough organisations in each, contradictions are likely to emerge. We may then be able to move beyond the generalities of the RAPID framework which, for all its uses, was never meant to provide anything other than an analytical and practical aid. Detail is not its strength.