
Posts tagged ‘policymakers’

What do policymakers want?

What do policymakers want? Three studies, one from the White House, one from Whitehall, and one from Australia, provide excellent information and insights into what they need from researchers and how they prefer to get it. You may not be surprised to find that they like theory and distrust anyone who talks more jargon than they do.


An underappreciated benefit of experiments: convincing politicians when their pet projects don’t work

David McKenzie’s post on Development Impact, “An underappreciated benefit of experiments: convincing politicians when their pet projects don’t work”, is worth reading.

It argues that impact evaluations can help to stop projects that do not work. However, this raises a question that is often not answered: can we test everything?

Are there any ‘experiments’ that we should simply avoid? Think tanks can make use of new methods and approaches to strengthen the evidence base in their arguments but they should be careful not to remove all signs of values and principles from them.

In any case, the reason this post caught my attention was that impact evaluations (if we can do them on the cheap) could encourage policymakers and politicians to experiment without having to stick by their ideas (even if those ideas end up being outrageous). If they were able to explain the concept of pilots to the public, then they might be able to promote an acceptable culture of innovation and reduce the fear of losing face that many politicians and bureaucrats have.

Never mind the gap: on how there is no gap between research and policy and on a new theory (Part 3 of 3)

[This is the last of 3 posts on the Gap between research and policy -or the absence of one. In it I try to outline a new theory to assess the roles research may play in policymaking.]

First post (1 of 3)

Previous post (2 of 3)

A new theory: Density

If anything, the larger the project the easier it should be to influence change. According to Mirko Lauer, a Peruvian journalist, the key factor that explains why some policy issues are more likely to be informed by research-based knowledge than others is the density of information about those issues. At an event for a Trade and Poverty in Latin America programme in Lima in 2009, he gave the example of economic policymakers in Peru who, like the general public, are exposed to a myriad of publications that provide an effective vehicle for research, analysis and opinions on almost all economic policy issues: all the major national broadsheets have a daily economics section, there are at least two national specialised monthly magazines and a number of weekly magazines and newsletters, The Economist, Business Week and the Financial Times are readily available, and there are many more email or web-based information services. Furthermore, the Ministry of Economy and Finance, the Central Bank, and the National Statistics Office have their own internal research teams, databases and publications.

Hence, Lauer argued, all economic decisions, even if they turn out to be mistakes (which would undoubtedly be picked up by specialised journalists and researchers and fed back through the channels mentioned above), are necessarily informed by some evidence. This is clearly not the situation for other policy issues, such as those related to indigenous groups, the environment, reproductive health or security, where research, general information and arguments are only sporadically gathered and communicated, and subject to the ‘if it bleeds, it leads’ mantra of newspapers and public opinion.

This can explain sector or issue differences as well as individual players’ perceptions of the distance between research and policy. Systems with high density would have a clearly busy middle ground, where multiple links between actors provide frequent and substantive interactions between research and policy. In less dense systems, on the other hand, the connections between research and policy would be fewer and less frequent, leaving members on both sides feeling isolated.

The concept of density might be difficult to accept in a context driven by the returns agenda. Density implies that the efforts made by research centres and their funders should focus on providing more and better knowledge to the public rather than on communicating it more strategically to specific audiences. It implies, in some cases, creating and supporting credible competitors in the knowledge market rather than fighting to become the most influential, the most credible or the only alternative. For researchers, it means learning to rely on others to communicate and translate their work to reach non-expert publics. For donors, it means that measuring the returns to research funding would be even more difficult than it is now; and that they would have to learn to live with it.

Nonetheless, density is a concept that works well with the idea of a political system in which knowledge actors are just one more player, and with the recognition that what matters are the links between research and policy, not those between individual researchers and policymakers.

Not all research is equally good

To make things more complex, not all research is of the same quality. More research is not, in itself, good. (And logically, not all researchers are good either.)

A recent World Bank study by Jishnu Das, Quy-Toan Do, Karen Shaines and Sowmya Srinivasan on the geography of academic research shows a striking reality. They found a direct relation between the number of publications about a country and its per-capita GDP. More significantly, though, they found that “1.5% of all papers written about non-US countries are published in the top-5 economics journals compared to 6.5% of all papers written about the US.” What this means is that for the poorest countries there are fewer peer-reviewed economics studies available:

“Over a 20-year span dating from 1985 to 2004, the top 5 economics journals together published 39 papers on India, 65 papers on China, and 35 papers on all of Sub-Saharan Africa. In contrast they published 2,383 papers on the United States.”

Whatever may be said about the biases of these journals, more worrying and pressing is that for the 20 poorest countries only 3 papers per country are published every 2 years. In the last 20 years, only 6 papers on Sierra Leone have been published in the 202 main peer-reviewed journals; 6 on Somalia, 16 on Rwanda, and 76 on Uganda. What are policymakers in these countries basing their economic policy decisions on?
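To put the quoted figures in perspective, here is a quick back-of-the-envelope calculation using only the counts cited above; the per-year rates and the US-to-Africa ratio are my own arithmetic, not numbers from the study:

```python
# Papers published in the top-5 economics journals, 1985-2004,
# as quoted above from the Das et al. study.
papers = {
    "United States": 2383,
    "China": 65,
    "India": 39,
    "Sub-Saharan Africa (all countries)": 35,
}

years = 20
for place, count in papers.items():
    print(f"{place}: {count / years:.2f} top-journal papers per year")

# The US alone received roughly 68 times as many top-journal
# papers as all of Sub-Saharan Africa combined.
ratio = papers["United States"] / papers["Sub-Saharan Africa (all countries)"]
print(round(ratio))
```

In other words, the top journals published more papers on the United States every two months than on all of Sub-Saharan Africa in twenty years.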

In Zambia, members of the Central Bank and the Ministry of Finance told me that the World Bank and the IMF are key sources of information. And what about research and analysis from Zambian researchers? Besides their own, pretty much nothing.

A key cause of this is that there are not many good researchers out there. The few highly competent researchers in poor countries are quickly identified and hired by their central banks (often the best staffed), the private sector and international donors. As a consequence, universities and local research centres are left (is this politically incorrect?) with second-rate researchers. (This is not their fault: blame their countries’ education systems.)

Nonetheless, anyone working on international development knows that there is quite a lot of information on some of these countries. Donors spend billions of dollars on research focused on the poorest countries in the developing world. Sub-Saharan Africa might not feature in the top economics peer reviewed journals but it is certainly the bread and butter of organisations such as ODI. For example, during the same period, ODI alone published 13 publications on Rwanda and 5 on Sierra Leone (not all about economics, though).

In any case, this means that many of the economic policy decisions made by policymakers working on the poorest countries are largely based on non-peer-reviewed research. And the research that is available (a lot is being produced) comes largely from second-rate researchers and organisations (many of which are advocacy NGOs, not research centres).

Density then cannot be about quantity alone, but also about quality; and quality may be measured in terms of the characteristics of the outputs or their production and dissemination processes.

A new strategic direction

I have already outlined some recommendations for programmes or organisations like RAPID, but before I provide some additional detail about these I would like to further explain this new theory of research uptake.

In this theory, let’s call it the Density Theory, decisions take place within political spaces, which exist around specific policy processes but are connected to all others. These spaces involve different actors and belong to a broader political system (as we have seen above). In each space there is, depending on factors like the sector and issue or the actors involved and interested, a certain amount of information communicated through different means to all those directly or indirectly involved in the policy decision. Of two spaces of similar size (number of issues or actors, length of the process, populations affected, etc.), the one with more information available will have the higher density. In other words, where there is plenty of information available about a particular policy issue through a number of competing and complementary media (including think tanks, newspapers, NGOs, government bodies, etc.) there is high density; where little information is available, or where there are few or no readily available sources of information, there is low density.

The level of density will, however, need to be qualified by the quality of the information that is readily available. So high density is not necessarily better than low density if all the information available is based on one-sided opinion, unchecked data and untested assumptions. But this is probably something we are likely to notice only if we challenge the arguments ourselves, or with the benefit of hindsight.

The spaces themselves can also differ in their political relevance: how many people, and who, they are relevant to. A space that draws significant public political interest, then, may be described as one where a large number of people or well-organised political/interest groups are affected and therefore play a significant role in shaping the policy process and its outcomes. Spaces with low political interest, in contrast, would be those where decisions are not highly contested, either because they are not known to the general public or because they are of no direct relevance to them or to political/interest groups.

The following table describes this relationship:

High density, low political interest: decisions are likely to be informed by research-based arguments, and these will be chosen and used according to technical or academic criteria.

High density, high political interest: decisions are likely to be informed by research-based arguments, but these will be chosen and used according to political criteria.

Low density, low political interest: decisions are likely to be made by individuals or by small, closed policy or interest groups, and are likely to be informed by personal values or interests, or by the little research that is available.

Low density, high political interest: decisions are likely to be informed by political considerations and based on electoral demands and the public’s values or interests; in other words, votes count.

The quality of the available information is likely to affect the outcomes of the decisions made.
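For readers who think in code, the four combinations above can be sketched as a simple lookup. This is purely an illustrative toy: the function name and the short labels are mine, not part of the theory.

```python
def likely_decision_mode(density: str, political_interest: str) -> str:
    """Map a policy space's density and political interest
    ('high' or 'low') to its likely decision mode, following
    the two-by-two framework described above."""
    modes = {
        ("high", "low"): "research-based arguments, technical/academic criteria",
        ("high", "high"): "research-based arguments, political criteria",
        ("low", "low"): "personal values or interests of small, closed groups",
        ("low", "high"): "electoral demands and public values: votes count",
    }
    return modes[(density.lower(), political_interest.lower())]

# A low-density, high-interest space: decisions follow votes.
print(likely_decision_mode("low", "high"))
```

The point of writing it this way is that density and political interest are treated as independent dimensions: changing either one moves a policy space into a different decision mode.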

What does this mean in practice? In general, we would want to reduce the number of issues that are decided on votes alone and promote, as much as possible, both public political engagement and decisions based on research-based arguments. In some circumstances, political imperatives will be more adequate for the choice of arguments and their application (for example, the development of a school curriculum, which must take the public’s views and values into account); but in others, academic or technical imperatives could prevail (for example, national security decisions).

More specifically, this new theory provides some practical recommendations to our donors, researchers, think tanks and to the rest of us.

First of all, we should pay more attention to the system as a whole and take notice of how its various components relate to each other in different contexts, sectors or policy issues, and we should pay particular attention to the different perspectives that various actors have of the whole system. Saying that context matters is by now a cliché: it is about time that we attempted to unpack this and did something about it.

With an appropriate understanding of the system, donors could deploy a sort of Marshall Plan for high-quality research that addresses the clear research capacity shortfalls of many developing countries and simultaneously funds a significant increase in the flow and stock of research products, with particular attention to those of academic quality and policy relevance (which does not mean only policy-relevant academic research; it is possible, and it would be great, to have both). This, of course, cannot happen without a commitment to reforming (and in some cases creating brand new) higher education policies.

Equally, donors should focus their efforts on developing and promoting strategies to strengthen the knowledge systems’ absorptive capacity. This includes the development of more specialised media publications and individual journalists, more spaces for policy debate, strengthening the relations between policymaking bodies, political parties and think tanks (internal and external), and developing analytical skills within policymaking bodies, among others.

More immediately, research programmes like RAPID should pay more attention to the system and all of its actors, and not just researchers and their centres, when studying the role that research plays in policy processes. Their questions should focus on the factors that affect the various roles that different types of research play in different contexts. This will give them, and the researchers they work with, a much better view of the system as a whole and help support their planning.

Support should hence continue to focus, more intensively, on the development of strategies to navigate the system rather than on attempts to bridge the mythical gap: more in-depth political economy analysis capacity rather than communications tips. Applications of complexity theory that may help us understand complex systems will be critically helpful; social network analysis should be on everyone’s list of things to learn about.
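As a concrete first step into social network analysis, the most basic measure, network density (the share of possible links between actors that actually exist), can be computed in a few lines. The actors and links below are invented purely for illustration, and libraries such as networkx provide this measure out of the box:

```python
from itertools import combinations

# Hypothetical actors in a policy space and the substantive
# links (regular interactions) between them.
actors = ["central bank", "finance ministry", "economics press",
          "think tank", "university"]
links = [
    ("central bank", "finance ministry"),
    ("central bank", "economics press"),
    ("finance ministry", "economics press"),
    ("think tank", "economics press"),
    ("think tank", "finance ministry"),
    ("university", "think tank"),
]

# Density = actual links / all possible pairs of actors.
possible_pairs = len(list(combinations(actors, 2)))  # 5 actors -> 10 pairs
density = len(links) / possible_pairs
print(f"network density = {density:.2f}")  # 6 of 10 possible links -> 0.60
```

A system with a busy middle ground between research and policy would score high on this measure; a sparse one, where researchers and policymakers rarely interact, would score low.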

Navigation may make use of tools but nothing can replace the judgement of the captain and the crew.

Finally, we should also take a step back and reflect on how research is carried out in developed and developing countries. If we want the system to be injected with a healthy dose of research based arguments we need to unpack the institutions, structures and processes that make this happen –or prevent it. Why is it, for example, that peer-reviewed economic research seems to be almost entirely focused on the U.S.? What drives the sudden demand for support on writing policy briefs?  Who is doing research in the least developed countries? How is this funded? Why is there little national funding for research? Etcetera.

Corruption free think tanks

The work of think tanks is never straightforward. Researchers do not exist in an institutional vacuum with no links to the real world. They must access information that is often kept behind closed doors, engage with policymakers whose agendas are controlled by often unscrupulous people, and so on.

This, however, does not mean that think tanks must bend or break the rules to work. Jeff Knezovich’s post in this blog last week made a passing reference to the new UK Bribery Act. I am sure there are similar such laws in other countries so this is equally relevant to organisations working outside DFID’s sphere of influence.

According to the Bribery Act:

What is a bribe? All payments of bribes, no matter how small or routine, or expected by local customs, are illegal. You are breaking the law whether you give or receive a bribe. Unlike some anti-bribery laws, the Bribery Act applies to bribes paid both to public officials and within non-public operations. A bribery offence is committed if the intention of the briber is that the person being bribed improperly performs his/her duties. Improper performance will arise if it is intended that, by paying the bribe, the recipient of the bribe would be expected to act otherwise than in good faith, an impartial manner or in accordance with a position of trust. Expectations are judged by UK, not local, standards.

And it is particularly relevant for organisations that claim to be promoting transparency and accountability.

Here is an anecdote that I have enjoyed telling for quite some time:

I had been invited to a meeting of the new grantees of a programme seeking to promote transparency and accountability in policymaking. I was there to talk about policy influencing. A past grantee had been invited to talk about their own experience in doing so. They used schools monitoring data to make recommendations to improve the quality of education in Guatemala. The process involved working with the school monitors who collected the data that they then used to do the necessary analysis. As he was describing this he was interrupted by one of the participants who asked how much they had paid the monitors. The presenter was a bit confused. Nothing, he said. They were doing their job, we just asked them for the information, which is public anyway. Again, the man who interrupted responded by saying that they should pay them because they were acting as their research assistants.

Before the presenter could say anything a debate started among the participants. Researchers from different parts of Africa intervened saying how it would not be possible for them to do any of this without paying them for the information; some argued that this was right and normal, others that this was not advisable -although the reason given was that these small incentives would translate into large payments once the project got to the top and tried to influence senior policy makers ‘there would not be any money left in the project.’ At this point, the representatives of the donors left the room -they clearly did not want to be part of the conversation. Good timing too because the presenter interrupted to say that he did not want to judge but what they were suggesting was not possible in his country because, to put it bluntly, it was corruption. 

No, it is just an incentive, said someone. 

So if you choose not to look the other way, Transparency International provides some useful recommendations based on the finding that:

among NGOs anti-bribery procedures are either poor or non-existent. This is often explained by the difficult circumstances in which NGOs are operating on the ground. Paying a bribe is seen as the only way to get things done.

TI has published some very useful tips for fighting corruption:

Conduct a risk assessment: where is your organisation exposed to a high risk of bribery – and how effective are its anti-corruption policy and management systems?

Introduce a zero-tolerance policy: put in place a headline policy that notes the damage that corruption does to your goals and mission, the importance of strong internal anti-bribery systems and makes it clear that the organisation does not tolerate bribery in any form. Anything less will provide a weak defence under the Bribery Act.

Information gathering: it is important to know whether bribes are being paid by your employees, agents or partners – and if so where, how much, and how frequently. This information is crucial if the organisation is to implement a zero-tolerance policy and, where necessary, try and ‘design out’ bribery from future projects or operations. Paradoxically, creating such a paper trail may provide evidence in a prosecution. However, such information gathering would probably be regarded as part of an ‘adequate procedure’, and therefore failure to assess the extent of bribery in an organisation might create a liability for senior managers and directors who could be accused of ‘consent and connivance’ by turning a blind eye.

Put in place robust anti-bribery systems: having in place ‘Adequate Procedures’ is the only defence to protect an organisation against corporate liability under the Bribery Act. TI produces a 20-point checklist for companies to assess anti-corruption procedures. Although it is aimed at companies, it is also relevant to NGOs. TI is seeking funding to develop NGO-specific tools.

Training and support: implementing effective anti-bribery systems can be a difficult process, and employees and partners may feel vulnerable and ill-equipped, especially in a transition phase from one way of doing things to another. Proper training and support is a vital part of this process.

Science limits our policy options -an event from the Institute of Ideas

A very interesting debate hosted by the Institute of Ideas on evidence-based policy (or is it the other way around?).

Speakers: Nick Dusic; Dr Evan Harris; Robin Walsh; Jeremy Webb
Chair: Tony Gilland

In every field from climate change to education, politicians increasingly defer to experts, and scientific experts in particular. The government has surrounded itself with scientists, and politicians from all parties seem keen to cite experts’ evidence-based findings whenever they want to push new policies, from early intervention in families to climate-change strategies. Too often the phrase ‘the science shows’ is used to close down any possibility of debate; facts are seen to trump morality and politics. But is this indicative of a new respect for science, or rather a lack of political principle? Far easier to wave a peer-reviewed research paper and proclaim ‘the science shows we have no choice’ than to try to convince the public of the merits of one or other policy decision.
Indeed it seems that when politicians don’t go with the science, they find themselves without a leg to stand on. When the government sacked drugs advisor David Nutt last year, this was widely seen as an unwelcome and sordid intrusion of politics into questions better left to the experts. So do we live in a scientocracy, and if so, is this a sign of enlightenment and political maturity, or should we be worried about the undermining of democratic decision-making? Has there been an elision of the natural and social sciences, with the latter borrowing the authority of the former? Does the elevation of scientific expertise obscure unexamined political assumptions and orthodoxies? Indeed, is there a danger that policy-led research subtly reproduces political prejudices rather than uncovering genuinely inconvenient truths?

Nick Dusic’s presentation is particularly interesting. Although keen to see evidence inform decisions, he highlights the need for the right institutions to ensure that evidence makes it into the decision-making process, and accepts that evidence does not necessarily need to substitute for ideology or values: they must work together in opening up the debate.

Another interesting intervention came from one of the participants, who said that people (including policymakers) do not understand the scientific method. This is important. We often focus on whether research-based evidence is included in the policy decision and do not worry about whether the decision makers understand what the evidence means or where it comes from. This, in my view, is potentially quite dangerous and could lead to the dumbing down of decision makers. It is more important for decision makers to know how to learn about something than for them to know it all. This, whether the reader likes it or not, was the original purpose of universities, which focused on developing the capacity to think rather than the accumulation of often meaningless titles and diplomas.

In the social sciences there are a great many contradictions in research findings, and if decision makers cannot understand why this is the case, and are also unable to explain it to the wider public, then they are likely to cherry-pick whatever creates the least hassle and best conforms to their views.

So the knowledge of scientific method (or research methods) and the existence of the right institutions ought to be the focus of any attempt to promote evidence based policy -rather than the myriad of projects, trainings, websites and infomediaries that seem to be all the rage among development research funders: smarter people in a smarter system.

Anyway, an interesting debate.


