Research uptake: what is it and can it be measured?
Over the years I have come across many definitions of what research uptake is and is not. I find the term rather confusing and limiting, like most jargon. It creates an image in my mind: a bunch of researchers working in a dark basement while policymakers look down on them, picking up their papers and reports.
With more and more funders assessing the uptake of the research they fund, it is important, I think, that we form an alternative and more accurate image. Maybe the following discussion points can help. In no particular order:
Research uptake is not always ‘up’: Not all ideas flow ‘upwards’ to ‘policymakers’. For most researchers the most immediate audience is other researchers. Ideas take time to develop and researchers need to share them with their peers first. As they do so, preliminary ideas, findings, research methods, tools, etc. flow in both (or more) directions. ‘Uptake’ can therefore very well be ‘sidetake’. And when researchers share with other researchers they often use rather technical terms and formats.
Which should remind us that there are many researchers in policymaking roles; not all research is academic.
By the same token, it could also be ‘downtake’. Much of the research done is directed not at high level political decision-makers but at the public (think of public health information) or practitioners (think of management advice and manuals).
Uptake (or sidetake, or downtake) is unlikely to be about research findings alone: If the findings were all we cared about, then research outputs would not be more than a few paragraphs long. Getting there is as important as, if not more important than, the findings. How else could we replicate experiments or check their accuracy? Methods, tools, the data sets collected, the analyses undertaken, etc. matter as well and are subject to uptake.
The process is important too because it helps us to maintain the quality of the conversation between the different participants of any policy process. Without policymakers understanding where ideas come from, we risk them becoming functional idiots (see Dumbing Down the Audience for a reflection on how think tanks may be contributing to this).
Replication is uptake too (and so is inspiration): There is also an element of inter-generational transfer of skills that must be taken into account when we consider research uptake. Much of the research that goes on in universities and think tanks has the purpose of helping to train new generations of researchers. If all research and all communication efforts are targeted at policymakers’ immediate needs, what will be taught to students? Writing a macroeconomics textbook, a new introduction to sociology, or similar efforts should be seen as just as important as putting together a policy brief. More important, in fact. The students who benefit from these research outputs are likely to have an impact on politics and policy far beyond the capacity of any single researcher or research project.
For think tanks, helping to form the next generations of policymakers, analysts, researchers is a core function.
Similarly, a significant number of studies that may look quite useless to the layman or laywoman are indispensable for the more practical and use-inspired work they crave. Before we can assess the impact of a programme on a population we need to explore theories, collect data, clean the data, test the data, test the theories on the data, maybe test our assumptions, review the assumptions, etc. Each one of these can be a study on its own and each one may take days or years. In the end a ‘finding’ may only be arrived at after years of work during which more than one researcher may have been in charge.
It is not just about making policy recommendations: The purpose of research is not to recommend action. And researchers, even in think tanks, are more often than not influential through their capacity to help decision makers understand a situation or problem rather than through attempts to inform or inspire a particular course of action.
Setting the media agenda, or focusing policymakers’ attention on an issue of public interest, is a crucial function of think tanks. Any attempt to understand uptake then needs to consider think tanks’ alternative, and important, functions: to set the agenda, to help explain a problem, to popularise ideas, educate the elites, create and maintain spaces of debate and deliberation, develop critical thinking capacities, ‘audit’ public policies and public institutions, etc.
Dismissal is uptake too: Uptake is often equated with doing what the paper recommends. But, as I have argued many times before, research does not tell us what to do: only what is going on (or has been going on, or may be going on in the future). It can offer alternative courses of action and assess their effects and their likelihood. But the choice is ours to make. Research tells one of my best and smartest friends that smoking is bad for his health; but still he chooses to smoke. Research suggests that girls who go to all-girl schools do better academically (particularly in the sciences) than their peers in co-ed schools; the latter, in turn, may do better socially. But what is better for them, social or academic skills, is a matter of choice for their parents and not for researchers.
In both cases research is being ‘used’: it is considered alongside other ‘evidence’ (e.g. anecdotal: “I know many people who smoke and live to 100”; “I did well at uni even though I went to a co-ed school”) and appeals to tastes (“I like smoking”), tradition (“I went to that school”), values (“I do not believe in exclusive education”), etc.
Researchers, and their funders, need to understand that by playing that particular role (to provide evidence) they have given up, in a way, the right to make the final choices (they are still citizens, of course, and have a right and responsibility to participate in politics). They cannot decide for smokers or other parents, for obvious reasons. But they also cannot decide for politicians. If they wanted to, they should stand for office, campaign, sue the state, etc. But they should not expect their research work alone to be enough.
A little bit of uptake is probably all you’ll ever get: If we are honest, even if we do everything right, it is unlikely that our organisations will achieve more than just a bit of uptake. How much uptake is good enough? I wrote this some time ago: “According to Hans, Felipe Ortiz de Zevallos (FOZ, the founder of the Grupo Apoyo, quite the celebrity among Peruvian intellectuals and a Latin American think tank pioneer) had told him, a few years ago when he was working for him, that: If you get 1 out of 5 right then you must be brilliant. If you get them all right it’s probably because you are not being ambitious enough. So a few hits were ‘pretty good’.”
Too much uptake should be worrying: Think tanks should be worried about having too much uptake; and funders should worry too. If a single organisation becomes so influential as to claim a high rate of success, then questions about undue influence over public issues are likely to arise. Think tanks and donors should encourage plurality and probably focus more attention on informing the debate than on specific individuals or decisions.
What would the British, Indian or Argentinean public say if a single organisation claimed ownership of most of their government’s policies?
Uptake of bad research is not good: This is a personal observation, but having worked with researchers in a number of countries and sectors I can say, with certainty, that not all research is good enough. For the most part I get the impression that research undertaken by the Aid industry is quite expensive but not necessarily better than what is done in more modest national research institutes. Unfortunately, research undertaken in the least developed countries, and certainly in countries with poor tertiary education systems, is, on average, sub-standard. The very best researchers in these contexts have already been snapped up by international organisations and public bodies like the central banks.
Still, international development funders and NGOs use them (sub-contract them) to undertake research and influence policy. They often base their own decisions on research done by single organisations rather than comparing multiple sources.
But funders should be careful about asking for uptake unless they are certain that the research undertaken by their grantees or sub-contractors is of the highest quality. And think tanks themselves should be careful that their work is of the highest standards. Quality is their best line of defence against accusations of bias, lobbying, and self-interest.
Uptake is good only when the process is traceable: Put the last two points together and we can conclude that good uptake happens when good ideas, practices, and people are incorporated into a replicable and observable decision making process. What we want is good decision making capacities and not just good decisions. The latter, without the former, could be nothing more than luck. And in that context, bad decisions are as likely as, if not more likely than, good ones.
From Herodotus’ Histories: “In my experience, nothing is more advantageous than good planning. I mean, even if a set-back happens, that doesn’t alter the fact that the plan was sound; it’s just that the plan was defeated by chance. However, if someone who hasn’t laid his plans properly is attended by fortune, he may have had a stroke of luck, but that doesn’t alter the fact that his plan was unsound.”
Bad decisions we can live with (it’s part of the democratic process); but bad decision making processes are unacceptable. And worse still is keeping these decision making processes out of sight. How else can the citizens of a country hold their politicians and civil servants to account if influence happens behind closed doors and policy is discussed in terms that exclude the majority of the population? More worrying still is when those involved in the decision making process are paid-for foreign consultants and ‘think tanks’, entirely unaccountable and free from the consequences of their advice.
Contribution is as hard to measure as attribution: I find the idea that one is able to measure the contribution of a single piece of work to a policy decision, programme or project difficult to accept. If we accept that attribution is impossible to measure (unless the decision is made on the basis of a consultancy, maybe) then contribution (a share or proportion of attribution) should be equally hard. But more importantly, in every situation (in politics, business, family life, etc.) the ultimate decision maker, the one to whom we could attribute responsibility, is whoever makes the choice: the choice to ask, to listen, to consider, to use, to believe, to trust, etc.
Surely, think tanks can say that they played a role but little else beyond that. The story of change produced for the benefit of their funders should conclude: “Think tank ABC played a role in policy XYZ: it produced timely research of good quality and made it available to those making the decisions. Its work has been acknowledged and is appreciated.” If instead the story claims that think tank ABC influenced policy XYZ... well, nobody likes a show-off; especially a dishonest one.
Uptake happens both ways: My main critique of the division between producers, users and intermediaries of research is that it assumes that there is a separation between these different actors. This discussion has reminded me of a blog I wrote over a year ago about the role of intelligence services. Gregory Treverton (director of the RAND Corporation’s Center for Global Risk and Security) wrote: “Intelligence is about creating and adjusting stories –this view has crystallised during my career as a producer and consumer of intelligence.”
In practice, what happens is that ideas flow between people (the Policy Brief, the Blog, etc. are simple tools). Also from Gregory Treverton: “At the National Intelligence Council, I came to think that, for all the technology, strategic analysis was best done in person. I came to think that our real product weren’t those papers, the NIEs (National Intelligence Estimates). Rather they were the NIOs, the National Intelligence Officers –the experts, not the papers… If policymakers ask for a paper, what they get will inevitably be 60 degrees off the target. In 20 minutes, though, the intelligence officers can sharpen the question, and the policy official can calibrate the expertise of the analyst. In that conversation, the intelligence analysts can offer advice; they don’t need to be as tightly restricted as they are on papers by the ‘thou shalt not traffic in policy’ edict. Expectations can be calibrated on both sides of the conversation. And the result might even be better policy.”
There are no Nobel Prize winners for work done last year: Nobel Prizes (probably with the exception of the Peace Prize -but I think we can all agree that this particular one has lost a lot of credibility in the recent past) are awarded for work done years, if not decades, ago. Ideas need time to mature, be tested, replicated, adopted, adapted, popularised, forgotten, rediscovered, etc. The real contribution of a body of research can only be judged in hindsight.
It is arrogant to think otherwise. User fees, import substitution, cash transfers, the green revolution, private pension funds (the list goes on and on) are all ideas that are still there, ticking along, some popular, some not any more. But their effect will only be known when all the dust has settled and we are able to assess their overall contribution.
I could claim success for having popularised the RAPID Outcome Mapping Approach among some think tanks, research projects and donors, but if I am honest I should also say that I am not sure if the approach is any good. It makes sense and it can help, but at the same time I’ve seen it used to avoid proper planning and harder-to-make organisational reforms. It is too early to tell; for me the jury is still out.
It takes time to really understand the contribution that an idea makes because the relationship between ideas and decisions is neither linear nor free of other influences. Ideas come out of, are supported by, explained in relation to, and adopted in conjunction with other ideas. And decisions are made in the same complex manner: within other decisions. A policy to reduce fuel subsidies is made within a larger number of other decisions that may depend on entirely different policy processes, research communities, disciplines, political fights, etc. It may even come down to what the constitution says about subsidies.
Understanding all of this takes time and often has to be attempted after the event.
This is probably why Nobel Peace Prizes do not have the same reputation as the others. In recent years the award has gone to people and organisations for work they have just done instead of to those with a much longer-term history of struggle. Who knows what will happen: Obama, Johnson Sirleaf, or Aung San Suu Kyi (and certainly the EU) could still mess it all up; after all, they are in power, and power tends to corrupt.
One sparrow does not make a summer: Often the focus of research uptake evaluations is a research programme or project or even a single study. This is not surprising as funders tend to be themselves rather atomised and uncoordinated. The governance team funds research, the health team funds research, and so does the education team. Then there are country offices, research departments, civil society funds, etc. And each one of these is further broken down: electoral reform in Kenya and accountability in Malawi, maternal health and child malnutrition, primary education and secondary education, etc.
So everyone is going around trying to assess the uptake of their own research. And why their research? Because it is likely that the funding was provided as a project with a business case that demands an assessment of value for money and impact.
But instead of attempting to assess the uptake or the impact or the contribution of each and every piece of research, donors should attempt to look at the contribution (not measure it, just understand it) that all research, or at least a substantial (in terms of density and consistency over time) body of research, has had (when the dust settles, of course).
It is not only expensive but also unreliable to judge the contribution of research by looking at a single piece of work. It could have been luck; the policy may just as easily be overturned (it would not be the first time a new government changed the policies of the last one; in Latin America it is as if it were part of the swearing-in oath: “I swear to change the policies of the previous government”); new evidence may be just around the corner; etc. Equally, it would be unfair to judge a piece of work for its failure to influence policy: the political timing may be wrong, economic concerns may force decision makers to delay a decision, etc.
It is also limiting in that it does not allow us to trace the many other ways in which research (the process) and research centres contribute to society: developing critical thinking capacity, refocusing efforts, improving the public debate, educating the elites, etc.
It all depends on the research policy regime: It is not the same to ask about the uptake of work that is commissioned and that of work that is unrequested. Surely, it would be quite a waste of time to go about evaluating whether a single consultancy was or wasn’t used. A simple 15 minute interview with those involved may be enough. Unrequested research would be more interesting to study.
In a recent case study of think tanks in Europe, conducted with Emma Broadbent, she got a rather interesting quote from a highly respected think tank in the UK (non-Aid). It went something like this: “DFID does not know how to work with organisations it does not fund.” When Harry Jones and I looked at DFID’s use of research and evaluations we found something that confirms this: staff there were more likely to use research and evaluations that they had commissioned themselves, even though all the research and evaluations in question had been funded by DFID.
If the DFID study tells us anything, it is that the burden of uptake (and proof) ought to be on those who should be taking it and not on those (mainly) providing it.
This is why I am not too keen on spending too much money attempting to communicate impact evaluations or RCTs (or evaluating their uptake). It is rare that one of these rather expensive studies has not been requested by the government or those who want to use the information.
It all depends on the research and the policy in question: It is also the case that uptake is highly dependent on the specific idea that we are dealing with and the policy (behaviours, processes, discourses, etc.) that the idea affects. Uptake of research related to the quality of the food rations provided in soup kitchens or school breakfast programmes may be easier to achieve than uptake of research related to electoral reform. Is one more important than the other? The former may take little effort (sometimes) while the latter may even cost the researchers their lives (sometimes). Should we only do popular (commissioned) research? Research with a high likelihood of uptake? Surely the answer to this must be a clear No!
Uptake is, more often than not, opportunistic and a matter of luck: Since researchers do not control the political agenda, and they certainly do not control the fluctuations of the economy, social conflicts, natural disasters, technological jumps, and other critical junctures, they do not control when their ideas may come into fashion, be needed or become applicable.
Elections, earthquakes, financial crises, scandals, democratic ‘revolutions’, a hike in commodity prices, the introduction of the iPad, and the popularisation of Twitter, among other factors, are more likely to explain the adoption of dormant ideas than anything else.
It all depends on others: If the research done is of high quality, relevant, useful, etc.; if it has been communicated in the most appropriate way (the Taxi Driver Test); in other words, if everything that the think tank or the researchers can control is done well: then uptake is really up to others. When the Republicans are in control of the Senate, Brookings gets called to provide evidence fewer times than when the Democratic Party is in control. It does not matter how much Brookings pays for the top researchers, the attention it pays to its work, or how that work is presented. In the end, what matters is that the Republicans think Brookings is a liberal institution and therefore does not share the core beliefs and principles of their Party.
So when funders look at the uptake of their grantees’ research they should probably be paying more attention to their grantees’ audiences (which include other social, economic and political actors such as NGOs, professional bodies, grassroots organisations, the media, lobbies, etc.: they all have roles to play) than to their grantees themselves. This holds if, and only if, the grantees are doing everything they can right: it makes no sense at all to be looking for uptake if the research is of poor quality or was poorly communicated. So:
- For the think tanks: Do you have a plan? Is it sound? Was it properly delivered?
- For the audiences: Did you use the research (the body of research, the ideas, etc., not the single paper)? Why?
Finding out ‘why?’ is where things get really interesting. Last year, working in Serbia, a policymaker with a background in research and great contacts with researchers and think tanks told me that what think tanks often do not understand is this: the reason their recommendations do not get adopted is not that policymakers did not think they were good ideas, but that the policymaking rules and processes that govern them (the bureaucracy) have the power to kill any idea, good or bad. Organisations have an incredible capacity to deal with disruptive ideas in the same way that a body deals with an infection or an allergen. Their structures, regulations, rules, and cultures are there to protect them from change.
So the reasons why an idea is adopted or not (by politicians, civil servants, other researchers, journalists, lobbyists, NGOs, the public, etc.) can include: the level of capacity (personal and organisational) to understand and use it, interest and opportunity, overlap with personal experience, its opportunity cost, etc.
All of which means that if we want to really understand uptake we need to really understand the communities that we want to adopt the ideas we are tracking. Nothing short of an ethnographic study will do.
Uptake is a lot of things that do not have to be measured, but should be understood: In conclusion, uptake can be a lot of things depending on the organisation, its strategy and objectives, the context in which it works, the issues it deals with, the audiences it targets, etc. What matters most is that as many as possible of the different contributions that think tanks can make are recognised, described, understood, and valued.
The problem is that if we recognise that the contribution of research can happen in such a broad range of ways, over time, and unexpectedly, then measuring it becomes an almost impossible task. Claiming with certainty that X influenced Y, even with caveats such as ‘may have’ or ‘is likely to’, is not only sloppy research but also dishonest. What a think tank can do is answer for its own actions: research quality, communications capacity, etc. Similarly, funders can make sure that their funding is provided in the most appropriate way.
But beyond that, it is all guess-work.
If you are not sure of the return, don’t invest: This is a bit of an afterthought point. Something that has caught my attention over the years is that this, the Aid Industry, is the only industry in the world where the investor invests and only then asks about the rate of return (this is also the only industry where competitors are allowed and encouraged to partner when bidding for public contracts!).
Research funders have no problem allocating hundreds of millions of dollars, euros and pounds to increasingly large research programmes without proper consideration of the absorptive capacity of the research communities they are targeting (which leads to hugely inflated and unsustainable salaries) or the expected return on their investment. Instead they ask the organisations to which they have just handed out millions to demonstrate, ex post, that they knew what they were doing. It is as if a FTSE 100 corporation invested in or signed a large contract with a company in Ghana without having visited it once, checked its books, assessed the capacity of its staff, evaluated its work so far, or considered its capacity to take on the additional work implied in the new investment or contract.
For most donors a simple proposal will do. Maybe an interview to discuss the project will be added to the process, but that is just about it. Hundreds of millions are awarded in this way. And in the end, the burden falls on the recipients to demonstrate that the projects they have been contracted to implement will in fact deliver their objectives (projects, mind you, that have often been designed by the funders and not by the recipients of the funds, supposedly to avoid conflicts of interest). If you ask me, it makes little sense.
Funders ought to make the case for research to whoever they have to make it to, and only then fund. Think tanks should be expected to do good and relevant research and to communicate it well and appropriately; but not to justify their existence. If their funders do not like them any more, or do not think they are useful or valuable to the societies and communities where they work, then they should stop funding them and that should be it. This is a right funders have; in fact it ought to be their responsibility to make up their minds about this. But they will have to explain why.
Funders like DFID, AusAid, IDRC, Gates, Ford, the Open Society Foundations, etc. clearly value research and recognise the positive contribution that think tanks can make to a society. They would not be funding them if they didn’t, and they can see the positive effects think tanks have in the societies that inspired them to invest in the first place. They should, and some are beginning to, focus their attention on quality and pay less attention to whether or not they have impact.