‘Research TO Policy?’ Reflections on a Persistently Intriguing Debate

26 February 2014

[Editor’s note: This post has been written by Peter da Costa and is also being published on ‘Behind the Numbers: The PRB Blog on Population, Health and the Environment’. Peter is based in Nairobi and can be found @PeterdaCosta2.]

A few weeks ago I facilitated a session on ‘Tools for Bringing Research to Policy Makers’, on the final day of the 8th Annual PopPov Conference on Population, Reproductive Health and Economic Development, held in Nairobi, Kenya. It’s not always a given that a subject this complex and contentious can be unpacked or chewed over in any substantive way in an hour and a half.

However, I was lucky to have a fantastic panel of high-level researchers and funders’ representatives who were able to cut to the chase and share fascinating and deeply insightful examples of how they harnessed research, in practical ways, to engage in policy spaces.

The Panel members were Jan Monteverde Haakonsen of the Research Council of Norway; Chima Izugbara of the African Population Health and Research Center (APHRC); Susan Rich of the Population Reference Bureau (PRB), which organized the event; and Veronique Filippi of the London School of Hygiene and Tropical Medicine.

For more on what they said and for a great summary of the Panel, please read the blog prepared by Kate Belohlav.

The PopPov annual conference is a gathering of seasoned and student economic demographers, population policy people and advocates, designed to incubate the next generation of scientists, build up an evidence base, and stimulate dissemination of research to policymakers. It’s part of a wider initiative, seeded by the Hewlett Foundation, aimed at strengthening the evidence base of the links between reproductive health and population dynamics. PopPov, administered by the PRB, provides research grants, doctoral dissertation fellowships, and organizes the annual conference.

I kicked off the session with a series of assumptions that spring to mind when we talk about ‘bringing research to policy makers’:

  • First, when we speak of ‘from research to policy’ or use the demand and supply analogy we assume the process to be a linear one, starting with generating the evidence and ending with policy makers consuming that evidence.
  • Second, we assume that policy makers make decisions based on evidence; or put otherwise, that evidence makes for sound policy.
  • Third, it’s assumed that there’s a need to bridge some sort of gap between the evidence and those who need evidence to inform their decisions.
  • Fourth, we assume that policy makers need help in accessing evidence, which needs to be ‘brought’ to them.

Interrogating the ‘research-to-policy’ nexus raises a million questions about how and why evidence is generated, what kind of evidence, what forms it takes, what expectations exist around its potential impact on policy or society, how policy makers think of or interact with evidence, whether evidence is intended to immediately inform or impact on policy, whether knowledge generated can be useful down the line or whether it sits on shelves, who ‘pushes’ evidence to policy makers, and so on. [For a discussion of these questions in the African context see these papers on the political economy of research uptake in Africa.]

After the initial presentations, I threw out some provocative questions – most of which will be familiar to R2P wonks who ruminate over these questions in their sleep, but not so evident to researchers (who are usually narrowly focused on their research questions) and funders (who are usually determined to pursue their goals and sometimes ignore evidence that casts doubt on their theories of change).

  1. To what extent are policies in your field of specialization evidence-informed? What are some of the factors that determine whether, and to what extent, evidence informs or even influences policy decisions?
  2. Where does the balance lie between supply and demand? Is demand a necessary condition for the uptake of research? Or can well-conceived and compellingly packaged research findings stimulate the interest of policy makers?
  3. Whose knowledge counts? Does the origin of the evidence matter to uptake? Is origin irrelevant as long as the evidence is robust and incontrovertible? Or is locally generated, locally owned evidence likely to be more acceptable to local policy makers?
  4. What kind of evidence is needed, to what end? To shift knowledge and attitudes? To change behaviour? To inform and influence practice?

Drawing on the vibrant discussion we had in Nairobi, let me attempt to provide my personal take on some of these questions:

  • Evidence doesn’t always or readily inform or influence policy. Sometimes it has no effect whatsoever – at least not in the linear way that many of the theories of change assume. Influence is often intangible and down the line. Evidence may be looked for or referenced much later on, when policy makers are looking for solutions to a policy problem. This of course challenges the measurement and ‘impact-now’ geeks to be patient about policy impact, in a world of scarce resources and increasing pressure to account to politicians.
  • Implementation is the elephant in the room. The history of development is littered with good ideas that have never been implemented, or at best have been partially implemented. While some policies inevitably need to be revisited, and new policies need to be developed to address emerging and new challenges, a lot of the problem of development relates to implementation failure. Ensuring that a government stays the course with reforms it has painstakingly developed and invested in represents one of the abiding challenges of our times. Evidence can play a role in and around implementation – whether gathered via systematic reviews, evaluation, or other types of approaches.
  • Policymaking is about politics, and evidence is only one of many variables in playing politics. Policymakers may or may not prefer or feel comfortable with locally generated evidence, depending on whether it suits their political purpose. Externally generated evidence may be rejected for being produced by ‘outsiders’, while locally generated evidence may be dismissed for being ‘partisan’ or ‘not rigorous enough’. Evidence that is generated without understanding of the political context is essentially useless.
  • Most intuitively agree that local research has a better chance of being listened to than research from outside. But a lot of work is needed to generate the supply of high quality evidence that is relevant and addresses the specific context and problems to be solved. Research, whether done in think tanks or universities or elsewhere, needs to be consistently high in quality. But quality is not enough. More thinking and energy need to be devoted to value-added packaging of the evidence in forms that will resonate with different policy audiences. Consumers of research need to be able to articulate what they want so as to influence or inform supply. Policy makers need to get into the habit of taking positions underpinned by evidence.
  • There needs to be a serious and sustained investment in homegrown research. Why is there such limited investment in research in developing countries, and what can be done to change this? Why do many prominent Northern investors in Southern research inevitably provide most of their funding to Northern centres of research excellence, whilst paying little or no attention to the painstaking and long-run enterprise of building and sustaining local capacity?
  • Donors would rather fund world class Northern institutes than a small research outfit in the East African highlands. After all, in highly specialised and technologically intensive fields such as vaccine development, the imperative may be to empower the best minds to develop the best technologies. Many would say this approach is perfectly justifiable if we want to prevent disease in the best ways in the shortest time possible. But often, such approaches to supporting research not only elicit and engender negative responses from Southern policymakers (or provide them an excuse to ignore the evidence, however compelling); more importantly, they also ignore the tremendous potential in developing countries that could be harnessed to solve health and other challenges prevalent in these regions.
  • Can policy-oriented research be ‘independent’ of context? Policy makers and policy implementers are overwhelmingly governmental and evidence is inevitably value-laden (informed by hypotheses, assumptions, theories of change, etc.) – so it’s a fiction to imagine a world in which neutral scientists come up with neutral research and then neutral policymakers take up that research and make neutral policy in the public interest. On the other hand, it is perfectly correct to insist on research being rigorous, methodologically sound and conducted under scientific conditions. Otherwise it would not be credible and people would not take it seriously. But does ‘independence’ mean research must be funded independently of government? Or does this only apply in the developing world? What does an ‘independent’ think tank or ‘independent’ university research centre look like, for example? Governments have a critical role – so it should be less about proximity or otherwise to governments than about figuring out how best to use credible evidence to persuade governments to do the right things policy-wise.
  • Why bother pressuring researchers to be more policy-oriented? Several researchers I spoke with during PopPov complained that they simply did not have the skills required, let alone the time, to sell their research findings to policymakers. I agree. Researchers are like rodents trained to sniff out explosives – you cannot ask these highly trained rats to develop counter-terrorism strategy. Other animals are needed for that. OK, researchers are not really like rats, but you get what I mean.
  • All too often, the R2P community has insisted we need to turn researchers into communicators and policy engagers. It’s not that simple. Researchers are not an undifferentiated mass of policy ignoramuses. I often meet researchers who are particularly gifted in engaging policy. We need to work with these early adopters. But we also need to call a spade a spade and realise that the community of researchers is diverse and that many need to be left to focus on what they do best.
  • Researchers must continue to publish in the forms that guarantee their credibility and respect among their peers. The Panel audience mused over whether asking researchers to make their research findings more policy relevant ran the risk of rendering evidence in too simplistic a manner, leading to credibility problems for the researchers amongst their peers. There’s always a risk of this happening. But at the same time, the knowledge research generates or adds to must be translated into different forms that will serve different audiences. In a world where we need to solve pressing problems (climate change, population growth, HIV/AIDS, malaria), we don’t have the luxury of doing only research that is pure, that only econometricians or economic demographers can even remotely understand.
  • If not researchers, who should communicate evidence to policy makers? Research needs to be viewed as a dynamic process that involves not only research, but a range of other functions, across the full spectrum – from ascertaining demand for evidence, to designing and generating research, to determining in what forms it should be shared and with which groups of policy stakeholders, to translating or repackaging it, to engaging with policy makers, to taking that engagement to the next level. Linked to this, research should be communicated throughout the cycle – to test ideas and share intermediate results; to inform course correction; to stimulate policymaker engagement with emerging evidence; to test the waters in terms of policymaker receptivity to emerging findings; etc.
  • A range of expertise is needed to deliver these functions. Yet we do not invest enough energy in brokerage, translation, or whatever you choose to call it. We persistently relegate this important set of functions downstream. Researchers organize a dissemination workshop, write a policy brief, tick the box and say their work is done. There’s a lot more to ‘R2P’ than that…