Understanding the demand for World Bank research within the Bank

1 February 2012

A year or so ago, DFID asked us (at ODI) to assess whether its research and evaluations were being used by DFID staff. Harry Jones and I developed an approach that focused not on the studies themselves (the supply) but on the way DFID staff made choices and the roles that evidence played in those choices. Evidence, we assumed, could come from different sources, one of which could be DFID research and evaluations. We did not, however, want to bias our study by focusing on them, and we felt that to say anything useful about the use they had within DFID we had to place them in the wider context of all other inputs to decision making.

A recent study from the World Bank takes a different approach. Martin Ravallion’s study, Knowledgeable bankers? The demand for research in World Bank operations, focuses on the demand for World Bank research. It does, however, share some of our conclusions:

The methods used affect demand:

Today’s research priorities may well be poorly matched with the issues faced by practitioners in these sectors. For example, the current emphasis on randomized trials in development economics has arguably distorted knowledge even further away from the hard infrastructure sectors where these tools have less applicability (Ravallion, 2009). Making the supply of research more relevant to the needs of development practitioners would undoubtedly help.

Absorptive capacity is crucial:

The differences across units in the demand for the Bank’s research are correlated with the incidence of PhDs and economists, suggesting that internal research capacity in operational units helps create absorptive capacity for knowledge in those units.

Researchers, however, also need to make an effort:

The slope of the relationship between perceived value and familiarity with research is positive but significantly less than unity, suggesting frictions in how the incentive for learning translates into knowledge. The responsiveness of researchers and the timeliness and accessibility of their outputs are clearly important to how much learning incentives lead to useful knowledge.

The study also offers two models of how research affects decisions:

In the first, practitioners have a demand for knowledge that does not stem from its direct bearing on their work. Much development research is a public good. Practitioners might read research findings to better understand the world in which they work, even when that understanding is essentially irrelevant to the specifics of that work.

Alternatively, in the second model, research has a direct value in the work of practitioners—such as by informing project choices at the entry stage and assessing impacts later on—and research findings are sufficiently relevant and accessible to assure that practitioners become well informed.

We found a few more options, depending on the type of decisions that staff had to make:

  • In some cases, evidence generation was incorporated into the policy cycle
  • In others, evidence was used to make small incremental changes and corrections to ongoing policies and programmes
  • In other cases, evidence had to catch up to events
  • And in others, more often than not, it was used to make sense of political demands

We concluded that DFID was better at using the evidence than at learning from it. These are two different things. It was also:

much better at using research and evaluation findings during or as part of a project cycle than in more complex and emergent decision-making processes.

In other words, it was better at working with a consultant than with an academic (or so I liked to think of it), and this resonates with Ravallion’s finding on the importance of staff capacity.

This, in turn, points towards a possible mismatch between the ideals and realities of lesson-learning in DFID. For example, research is largely done outside the organisation by increasingly large consortia with clear incentives to communicate to audiences other than DFID. The incorporation of their findings into DFID policymaking processes then depends on these programmes’ communications capacities, on intermediaries (both technology-based and knowledge brokers), and on DFID staff themselves, who, according to the study, are under increasing time pressures that reduce their incentives to engage with research and evaluation processes and with the analysis of their evidence and findings.

To us this meant that DFID was pushing research away from itself, making it a foreign concept. Its efforts to bring it back by hiring Knowledge Brokers (PhDs) to mediate did not seem to fit with what emerged as the more effective model:

The system emerging from this is one where intermediaries between research and evaluation and policy and practice play a significant role.

In other words, learning in DFID (of the kind that promotes the incorporation of analysis into decision making and the development of a learning organisation) works best in a system with fewer intermediaries and more direct relations between the users and producers of knowledge.

I liked this conclusion because it fits nicely with a belief that we need to pay more attention to people in this business of international development.