The relationship between corruption and evidence-informed policy (EIP) runs in both directions: corruption can erode the very foundations of evidence-based decision-making, while EIP, in turn, can be a potent tool in fighting corruption.
Corruption undermines trust; it is sustained by misinformation and outright lies.
To put it simply, corruption is the enemy of EIP.
By confronting this reality, the EIP field can make strides towards a world where policy decisions are not only informed by credible evidence, but are also more resistant to the pervasive influence of corruption.
Yet there is an absence of dialogue between the field of EIP and other fields that hugely influence the success of EIP research and practice.
In this article, I outline some of my emerging ideas about this. I hope that it will open a new conversation.
The expansion of the EIP field
Over the years, as the EIP field has developed, its concerns have expanded to address the generation, communication and use of evidence. It now involves multiple public and private actors in the policy research ecosystem.
For example, it’s now directly concerned with matters related to equity and diversity and, soon, it’s likely to take on the challenges and opportunities presented by artificial intelligence (AI).
The agenda of the World Health Organization's (WHO's) Global Evidence-to-Policy (E2P) Summit 2023 offers a good illustration of the range of issues covered by the field of EIP. (I'm still surprised by the use of '2' (to): don't we all know that it is not a one-way, linear process?)
However, the field also appears to avoid some aspects of the policy research ecosystem, including corruption.
I find this puzzling and concerning. Puzzling, because corruption is impossible to miss – at least, it’s impossible to miss in the international development sector and certainly when it concerns EIP in much of the Global South. And concerning because if we do not account for its effects on EIP, how can we expect to make a difference?
So, why is it almost absent from most major reviews and studies on EIP?
The relative absence of corruption in reviews and studies
A simple search of the 2022 Evidence Commission Report fails to find any reference to corruption.
Also, the Hewlett Foundation's 2023 landscape scan of evidence-informed policy-making in East and West Africa only refers to corruption in relation to what evidence can do to fight it – but not in relation to what corruption does to EIP.
In the European Union’s (EU’s) Science for Policy Handbook, corruption is only mentioned once, in a section on researchers’ conflicts of interest.
Even Paul Cairney’s blog, which is full of nuanced and insightful reflections on the limits of evidence in policy-making, fails to address corruption as a factor that affects EIP.
Among the few recent works we found that explicitly consider corruption is Vanesa Weyrauch's Context Matters Framework, most recently included in Goldman and Pabari's Using Evidence in Practice: Lessons from Africa. Here, corruption is included as a factor affecting evidence use. Corruption also features throughout African Parliaments Volume 1: Evidence Systems for Governance and Development, edited by Linda Khumalo, Candice Morkel, Caitlin Blaser Mapitsa, Hermine Engel and Aisha Jore Ali.
As hinted above, the contribution of EIP to the fight against corruption is more commonly mentioned in studies on EIP.
For instance, while INASP's Evidence in African Parliament report from 2016 doesn't mention corruption, it does briefly address the contribution of evidence to transparency and accountability – which is certainly related.
A search using Google Scholar presented similar results: a clear focus on the contribution that EIP can make in the fight against corruption.
This potential contribution of EIP is important; therefore, it's concerning that the effect of corruption on EIP appears to be broadly absent from the literature. Corruption is endemic in many parts of the world. It's certainly present everywhere in my own country, Peru. It affects everyone – particularly the most vulnerable – and every institution; EIP is no exception.
But if very little is said about corruption, how can we account for its influence on EIP? And how can we incorporate it into our design of EIP interventions?
Of course, the mere mention (or absence) of a word is not what really matters here. The word is simply an indicator that the issue it represents has been considered.
What can EIP do to fight corruption?
Let’s begin with the easy part, then. EIP practices can be actively employed to fight against corruption (including promoting transparency and accountability). Here are some ways that are frequently mentioned in the literature on EIP:
- Identifying corruption patterns by analysing data and evidence can help understand corruption networks and risk-prone sectors, guiding interventions.
- Establishing transparency and accountability by promoting open access to information and by continuously monitoring and evaluating anti-corruption efforts, fostering public trust.
- Developing evidence-informed interventions and reforms that can make anti-corruption efforts more effective.
- Designing evidence-based training programmes and public awareness campaigns to enhance both official and public roles in fighting corruption.
- Guiding the adoption of global best practices and harmonising laws, fostering worldwide cooperation against corruption.
However, a discussion of these practices is more likely to be found in anti-corruption conferences; transparency, participation and accountability fora; or in political science symposia.
The impact of corruption on EIP
My argument is simple: corruption can corrode the very fabric of EIP, eventually distorting even the best-informed policy decisions, their implementation and their outcomes. Therefore, we must be explicit about corruption in the study of EIP and in the development of EIP interventions.
During my Google Scholar search, the one paper that I found on the effect of corruption on EIP argues that “the threat of corruption prevents evidence informed policy”. This article illustrates what we’re missing by not making the study of corruption in EIP more explicit.
This is consistent with conversations I've had with ministers in Peru about the design of a scientific advisory system. The threat of an overzealous public comptroller's office, which tends to prosecute anyone who deviates from approved plans and procedures, often stops non-corrupt policy-makers from making evidence-informed decisions. It forces them to follow established policies and practices, even when the evidence doesn't support them.
In seeking to prevent corruption at all costs, Peru's anti-corruption watchdogs end up undermining non-corrupt policy-makers' capacity to innovate and deliver better-informed policies.
Of course, the most obvious effects of corruption concern how actual corrupt practices prevent the appropriate use of evidence to inform decisions, including corruption in research itself.
For example, during the Covid-19 pandemic, the head of the Chinese vaccine trials in Peru and many of the university’s authorities, researchers and leaders in the scientific community were found to have taken “courtesy” jabs during the trials. We eventually found out that the Minister of Health, many of her closest advisors and even Peru’s President and his family had been irregularly vaccinated.
Corruption can have systemic and localised effects on EIP. Here is an unedited list.
System-wide and structural impacts
1. Policy formulation and implementation
- Political influence and bias: manipulation of evidence to favour specific interest groups.
- Regulatory capture: suppression of evidence that should lead to stricter or different regulations.
- Legislative impact: corrupt influences can skew legislation away from the evidence and the public interest.
2. Economic and financial factors
- Funding and resource allocation: misallocation of government research funds to serve private interests.
- Procurement: corruption in procurement processes can lead to evidence being ignored in favour of biased selections.
3. Governance and transparency
- Transparency and accountability: systemic corruption can undermine openness in how evidence is gathered, stored, used and shared.
- International collaboration: corruption may hinder adherence to international evidence-informed standards, affecting global policy alignment.
4. Social impact
- Public trust: corruption erodes trust in how evidence is generated and used, affecting public support for policies and even for expertise and evidence.
Individual and localised impacts
1. Research integrity
- Manipulation of data and research: altering findings to suit personal or organisational agendas.
- Academic misconduct: practices such as plagiarism or data fabrication undermine credibility and may lead to skewed decisions, which are to the detriment of the public interest.
2. Decision-making and implementation
- Bribery and coercion in decision-making: ignoring or misrepresenting evidence due to personal gain.
- Conflict of interest: individual biases may lead to a preference for certain policies despite the evidence – and for certain evidence despite public consensus on its weakness or unreliability.
3. Community impact
- Local governance: corruption at the local level can affect how evidence is used in community-based policies.
In my view, the corrosive effect of corruption on EIP at both macro and micro levels underscores the need for greater research on these effects and for efforts to address them directly in any EIP intervention.
Why is corruption not a central aspect of EIP work?
Of course, as Emily Hayter reminds me: “the absence of published papers on corruption and EIP doesn’t mean practitioners/researchers don’t know about it. What gets discussed in real life partnerships with civil servants v what gets published in documents listed on Google Scholar are two very different things!”
Corruption is discussed (I know, I've been doing it myself); we just don't see it written about.
I would like to offer four possible (oversimplified for the sake of argument) explanations for this – all of which are open to discussion.
1. The dominant EIP narrative originates in medicine and the natural sciences
It did not have to be like this, however.
When the Overseas Development Institute’s (ODI’s) Research and Policy in Development (RAPID) programme started working on EIP back in 2002, it was firmly grounded in political science and in the importance of the political context. The cases we looked at involved party-political interests, cases of state capture, messy social transformations and, yes, corruption.
Together with our partners in think tanks across the world, we championed the legitimacy and use of many forms of research (broadly defined as any systematic way of generating evidence) and evidence (which we took to mean different things to different professions and groups of people).
We recognised that evidence was only a small part of the decision-making process. Other factors were more important in explaining decisions: political interests, private interests, organisational culture, tradition, values and pragmatism.
Unfortunately, EIP has come to be dominated by a sub-set of economists and natural scientists who have not always been able to reach out to, engage with and embrace what the rest of the sciences have to offer.
2. Funding, literature and practice in EIP are still dominated by the Global North
While there is interesting and ground-breaking research and practice in the Global South – as is perfectly illustrated by the background readings for some of the sessions of the African Evidence Network's Evidence 2023 – randomised controlled trials (RCTs), nudge units, policy labs, What Works Centres and delivery units dominate our attention.
One thing they have in common is that they have all been popularised, celebrated and exported by northern researchers, organisations and governments.
This northern dominance is typical of fields that are not seen as 'urgent'.
While Latin American, African and Asian researchers were busy thinking about the best policies to reduce extreme poverty, we, at RAPID, could spend time reflecting on how those researchers did their work and what affected them. This was a privilege that we tried to share through the Evidence-Based Policy in Development Network (EBPDN) and that I have since tried to sustain through On Think Tanks.
In the United States (US), Canada and in most of Europe, ministers and civil servants have (and abide by) codes of behaviour, and the media is free and capable of monitoring their adherence to good practice. Also, public and private research centres and think tanks are relatively well-resourced and routinely supply decision-makers with information. And political parties also have research and programmatic capacity. Sure, there’s corruption – I’m not naïve! But having lived and worked in the UK for 15 years and now in Spain, I know that there’s a big difference between the corruption in these places and in Peru (and in much of the Global South).
This northern dominance creates at least two problems in the EIP field:
- Northern researchers don't include corruption in the theories and analytical frameworks used to study and plan for EIP in the Global South. (It's worth noting that the Context Matters framework was developed by an Argentinean and that the editors of the volume on African parliaments' use of evidence are all African.)
- As northerners, they may find it hard and uncomfortable to explicitly speak about corruption in the Global South.
As a Peruvian, I get away with a lot when it comes to speaking about corruption, racism, classism, violence against women and many other chronic social, economic and political problems. I don’t look or sound judgemental. How could I? All of Peru’s elected presidents since 1985 are in jail, about to go to jail or dead to avoid being sent to jail! I’m in no position to lecture anyone.
But can you picture a British researcher bringing up the subject of endemic corruption in India?
I guess this is another reason why we need to localise decision-making in policy research: so that we can have more open and candid discussions about what’s absolutely obvious to us.
3. EIP researchers see themselves as “insiders”
The field tends to describe EIP work as technocratic. EIP funders, researchers and practitioners want to work with researchers, communicators, policy-makers and funders to offer them technical (evidence-informed) advice and to help develop technical solutions.
Recently we have noted several calls for research proposals requiring formal partnerships between researchers and governments. And the increasing popularity of policy labs and other ‘embedded evidence’ solutions illustrates this preference for cosy relationships.
Imagine pitching the development of a policy lab in a Ministry of Transport and telling the minister that, before discussing its design, you need to talk about high-level corruption in their office. It would not go down well.
4. In my opinion, efforts to build the field of EIP have partially backfired
Sure, we’ve built a field, but this field sometimes feels isolated and detached from other, more established and, in my view, more influential ones.
Our knowledge seems partial. Over the last few years, I've gained greater insights into EIP in Peru from political science essays (El páramo reformista by Eduardo Dargent and Repúblicas defraudadas by Alberto Vergara) than from studies from the EIP field itself.
In our study of knowledge translation (KT – another way of approaching EIP), we arrived at a similar conclusion. We argued that, to have an impact, efforts to support the effective translation of knowledge for use in policy need to work with and from within efforts to transform political parties and party systems, civil services, education systems and civil society at large. Without changes in these, neither an embedded policy lab, nor hundreds of newly trained impact evaluators, nor other KT interventions will succeed in making a difference.
If we haven’t engaged enough with corruption, it’s maybe because we’ve become too focused on ourselves as a field rather than on trying to embed EIP into other fields.