
The politics of the evidence based policy mantra

Andries Du Toit’s paper on the politics of research is one of the best studies on the links between research and policy that I have ever read. It is also one of the few coming from a developing country and written from that perspective, and in English, which will help in getting some of the points it makes across (if you can read Spanish, I suggest you have a look at this book, which covers similar issues for Latin America).

This paper takes a different tack. It is not concerned with the theory of poverty measurement. Neither is it concerned with the ‘how to’ of ‘getting’ or ‘communicating’ the ‘evidence’. Rather, it seeks to look critically at some of the assumptions informing these concerns. The paper is part of a broader engagement with the politics of knowledge production in policy-oriented social science research. This engagement is based on a desire to better understand the relationship between two different worlds of practice — that of social science research and scholarship on the one hand, and that of social action, intervention and policymaking on the other. Against this background, this paper focuses on the habits of thought and agendas associated with the currently dominant way of thinking about these relationships — the discourse of ‘Evidence-Based Policymaking’ (EBP).

I read a draft version of the paper earlier in the year and have been meaning to review it for some time. And this is the line that got me interested:

This paper argues that while the desire to ensure that policy making is informed by social science may be laudable, the assumptions underlying these assertions about the role of evidence and science turn out to be dubious, and provide a poor guide to the challenges involved.

Emma Broadbent’s papers on the political economy of research uptake in Africa made many similar points. Not surprisingly, though, her ideas have not been rapidly picked up by the ‘sector’. They challenge the very assumptions that many consultants working on EBP and research communications use to keep their clients and donors happy with promises that it is, in fact, possible to influence policy. The paper is worth reading in full, but here are some highlights and comments from me.

First, he is extremely clear in pointing out that:

  • The evidence-based policy narrative is a normative and not a descriptive one: it is about what should be rather than about what really is; and
  • It is a deeply political one: the very idea that policy should be about ‘what works’ and not about values is a political statement, a strategy, in fact, to sideline political debate and clear the way for unilateral reform. This is what the Blair government did in Britain, but also what the Fujimori government did in Peru. (It is clearly described by researchers in the study of think tanks and political parties I edited a few years ago.)

There are, then, several objections to this discourse:

  • That it is potentially anti-democratic, in that it keeps the public out of discussions that, under the EBP discourse, can only be had by ‘experts’, those who have the evidence (and not just any kind of evidence). This is something seen over and over again in Latin America, and Woodrow Wilson warned the American public about the threat to democracy posed by technocrats. In fact, many of the first foreign policy think tanks were set up precisely to keep democracy out of foreign policy!
  • It is also difficult to export and attempt to apply the EBP ideal to all contexts where the institutional infrastructure does not exist, never mind the calls that ‘context matters’ when the implication is still to implement a programme designed in London or Washington, often by consultants with little interest in learning about the intricacies of the societies they seek to engineer.
  • But one of the biggest concerns is that EBP works (better) for clear and well-defined objectives: measurable outcomes. Development, however, is not about these alone but about a much broader challenge: society. Many aid agencies have been praising Rwanda’s reduction of child mortality but they seem to forget the increase in political repression. The ends cannot justify the means, even if the ends are the MDGs and were achieved using randomised controlled trials.
  • And finally, EBP makes it difficult to think about policymaking in a more nuanced way. It limits, right from the start, what and how we study policymaking and the role of research and researchers. The questions EBP advocates ask in their case studies have to do with finding out why evidence is not more influential, as if it had the right to be so. In my six years at RAPID, nobody (myself included) answered a question I posed from time to time: if we say that not enough evidence is used, then how much is enough? Once we free ourselves from the EBP discourse (or mantra) we can start to ask more interesting and exploratory questions: how does policymaking happen? Who plays what roles? Why? etc.

In fact, EBP is as ideological as the very approaches to policymaking it seeks to discredit. Du Toit provides an excellent illustration of these points using the case of South Africa. He concludes that the claim that we need EBP to move past ideology and guesswork is:

an exaggerated claim and misrepresents the issues. EBP discourse involves a narrow and technicist understanding of what is involved in policy making; it has a naïve empiricist view of the role of evidence in social science; and it misunderstands the importance of politically and ideologically loaded ‘policy narratives’ in policy change, even in situations where these policy debates do involve appeals to ‘evidence’ and research findings.

This is why I am sceptical when many claim that they recognise that ‘other factors affect policymaking’. Sure, they do, but they wish they didn’t. I know because this is exactly how I used to work, and what was behind the development of frameworks and tools like the RAPID Outcome Mapping Approach.

Du Toit does not dismiss the positive role that science (and the evidence it produces) can play in policymaking but instead argues that it must be incorporated into large arguments and policy narratives in which ideology (values) can and must play a role. This is at the core of my own critique of the recent obsession with randomised controlled trials and impact evaluations (which are useful in some cases), and of the characterisation of the policy space as one in which producers and users of research are linked by a separate group of intermediaries (they often employ terms like demand and supply in an effort to turn what ought to be a public debate into a marketplace).

His conclusion is a call to researchers (in think tanks and in academia) to stand their ground:

it is important to defend the critical independence of academic research, and not to allow a situation in which the need for ‘user uptake’ can cause researchers to abandon their integrity and independence, so that ‘evidence-based policymaking’ starts turning into ‘policy-based evidence making’.

The full paper can be found here and it is well worth a read: Working Paper 21: Making Sense of ‘Evidence’ – Notes on the Discursive Politics of Research and Pro-Poor Policy Making

And PLAAS is organising a conference on this in November in Cape Town. I hope you’ll be able to make it: International symposium: The politics of poverty research: Learning from the practice of policy dialogue

18 Comments
  1. aademokun

    I read Andries Du Toit’s paper with interest a couple of weeks ago and have been meaning to respond to it. Thanks for your review, Enrique, and the points you raise. My main issue with the paper and the review is that very few people I discuss these issues with genuinely believe that policy should be based on evidence alone. Most people would argue that it should be informed by the best available research evidence. This may seem like semantic trickery but it is a very important point: policy in the real world is informed by a lot of things, including political expediency, time, lobbying, political agendas (in the UK, the intricacies of coalition politics) etc., and all that those of us who are interested in this area are saying is that the best available research evidence should be one of those factors. So I would say I advocate for evidence-informed policies. And yes, it is a normative statement and not a descriptive statement, but I don’t see how that devalues EIP as an idea.

    Just to comment on a few of your objections to the discourse and some questions of my own:
    That it is potentially anti-democratic: in that it keeps the public out of discussions that, under the EBP discourse, can only be had by ‘experts’ – this is true if one talks about EBP, but not if one talks about policy that is informed by evidence, not based on it. In a recent paper by Kirsty Newman, Catherine Fisher and Louise Shaxson of (then) INASP, IDS and ODI respectively, they use a definition of EIPM which I recognise and one which informs the work we try to do:

    “We argue that evidence-informed policy is that which has considered a broad range of research evidence; evidence from citizens and other stakeholders; and evidence from practice and policy implementation as part of a process that considers other factors such as political realities and current public debates. We do not see it as policy that is exclusively based on research, or as being based on one set of findings. We accept that in some cases, research evidence may be considered and rejected; if rejection was based on understanding of the insights that the research offered then we would still consider any resulting policy to be evidence-informed.”

    At another level, the argument that the EBP discourse is anti-democratic raises a more fundamental question: is policy making democratic (and should it be)? In a parliamentary democracy you elect your MPs or other representatives, presumably from a party whose political manifestos and ideologies you share, and they then go on to implement policies based on their manifesto pledges (or not). For some of these policies wider consultation is sought; for others it is not. At what stage is the policy making democratic, and at what stage should it be? Indeed, is that itself a normative idea?

    It is also difficult to export and attempt to apply the EBP ideal to all contexts where the institutional infrastructure does not exist… – Yes, it is difficult, and the question of institutional infrastructure (including the capacity to access and use research, and the motivation to use research) is an important challenge, one that should be understood before expecting that research will magically be taken up by those at these policy making institutions. Not every political reality or institution is able to implement evidence-informed policies, but where they are able, what then? If the implication you raise is to ‘…still implement a programme designed in London or Washington and often by consultants with little interest in learning about the intricacies of the societies they seek to engineer’, is the criticism here about the lack of local understanding or interest by consultants from London or Washington? If so, this is not a problem of EIPM advocates per se but a larger problem (and an important one) of how development programmes are financed and implemented and the incentives of those involved in this sector.

    But one of the biggest concerns is that EBP works (better) for clear and well-defined objectives: measurable outcomes. Development, however, is not about these alone but about a much broader challenge: society – Again, I think the semantic difference between ‘based’ and ‘informed’ is important here. I don’t think there is an explicit assumption that evidence-informed policies are better for clear and well-defined measurable outcomes. I think, and correct me if I am wrong, that you mean the assumption that a particular methodology, RCTs, is better for clear, well-defined outcomes. I would say that RCTs are not the only way to generate ‘evidence’, and to reiterate that evidence (in EIPM thinking) constitutes a lot more than that which is generated by one methodology; rather it constitutes a range of considerations, as discussed above.

    And finally, EBP makes it difficult to think about policymaking in a more nuanced way. It limits, right from the start, what and how we study policymaking and the role of research and researchers. I agree with you on the potential to limit the ways we engage with policy making. In fact, most case studies want to find out: ‘how was MY research influential or not in policy making?’ (as if it had the right to be so). The question you pose about how much evidence is enough is an interesting one, but I think the question should be: was the best available evidence considered?

    I recognise the implications here about who defines ‘best’? Available to whom? Considered by whom? And feel they are linked to the questions you pose: how does policymaking happen? Who plays what roles? Why? etc.

    I hope this conversation can help us move the discourse from whether policy should be based on evidence to the more interesting questions posed above and to a place where evidence is “…incorporated into large arguments and policy narratives in which ideology (values) can and must play a role.”

    October 17, 2012
  2. Hans Gutbrod

    Interesting post, and I liked the comment above.

    Perhaps one problem, more broadly, is that people don’t acknowledge the normative part openly – if they were to do so, they could actually ground questions of “how much evidence is enough” in a broader framework.

    Sure, that kicks the debate across the fence into another complex field, but policy research would not be alone in acknowledging its broader embeddedness. It would follow medical or biological research, both of which have broadly accepted that questions of ethics need to be addressed.

    October 18, 2012
  3. Thanks for this post and for signposting us to Andries’ great paper. ‘Mantra’ is the right word for EBP. I am writing at the moment about the relative invisibility of unpaid care in development policy. When care’s invisibility is challenged, the burden of proof is thrown back onto the challenger. System bias is sustained by EBP’s circular logic, with an argument that runs: if there were sound evidence, adequately communicated, then naturally decision-makers would take note and respond. That they have ignored the evidence means that it is flawed and/or badly communicated. Thus the discourse of evidence-based policy making nullifies the possibility of admitting to strategic ignorance of inconvenient truths. These are truths that would oblige a reassessment of policy priorities and budgets and might even challenge one’s understanding of how the world works.

    October 27, 2012
  4. williamevans1980

    Du Toit’s article is a useful one, and eloquently describes some of the genuine challenges to drafting and implementing evidence-based policy. However, the paper leaves the sense that du Toit is himself rather uncertain about the way ahead: the policy implications of his research, if you like. On the one hand, he notes that “social scientists have to give up the claim that their work can provide a privileged and incontestable ground for policymaking, situated beyond the messiness of ‘guesswork and ideology.'” But on the other, he also argues that “it is important to defend the critical independence of academic research, and not to allow a situation in which the need for ‘user uptake’ can cause researchers to abandon their integrity and independence, so that ‘evidence-based policymaking’ starts turning into ‘policy-based evidence making’.” Too often, the article verges on the very negativity about the value of evidence in policy making that it says it eschews.

    I suspect that the challenge is somewhat similar to that of ‘underground’ music artists. So long as their work remains untainted by more popular cultural influences, they are assured of maintaining their integrity and credibility in the eyes (or ears) of a relatively limited audience or fan base: they are often destined to remain ‘undiscovered’ or unpopular. But frequently, artists are tempted by the greater commercial successes of ‘the mainstream’. They engage in the sights and sounds of popular culture, and in so doing, their music becomes more commercially successful. They are, in a sense, rather like the researcher who has begun to produce findings more easily ‘usable’ in the policy environment. Musicians and researchers alike are prone to ‘capture’.

    Given this challenge, (and speaking as a ‘knowledge intermediary’ or Evidence Broker sitting between DFID’s Policy Department and their Research & Evidence Department) I do not see it as problematic that the ‘evidence based policy’ discourse is “normative” and, in some ways, “aspirational.” Striving to shape policies such that they are ‘more’ evidence-based (even if we sometimes do not succeed) requires that we set ourselves a standard. In aiming for ‘evidence-based policy’, we may be able to achieve ‘evidence-informed policy’.

    With regard to ‘ends’ or ‘means’, I am typically more comfortable (as a civil servant) leaving the desired ‘ends’ to policy-makers, and attempting to focus my own efforts on what the evidence says about the ‘means’. In so doing, I would always recognise that RCTs, for example, are just one aspect of the diverse array of relevant evidence.

    Lastly, I note du Toit’s point that whilst evidence matters in policymaking “what matters is not the careful and meticulous description of one particular aspect of reality, but rather using evidence rhetorically to buttress arguments.” This is certainly true, in the same way as evidence is used by lawyers in a courtroom to buttress their arguments. We trust, nevertheless, that in a well-managed courtroom, occupied by a professional legal cadre, overseen by a legitimate judiciary, evidence is used more or less responsibly. The ‘risk’ of manipulation of evidence (in policy, no less than in the courtroom) is no reason not to strive for its greater usage.

    November 8, 2012
    • Hans Gutbrod

      I think the courtroom analogy is great: in matters of greatest import, it is how we handle claims to truth, and the process of challenge, and of being able to challenge, is critical for moving beyond “reasonable doubt”.

      And I also agree with the final point: “The ‘risk’ of manipulation of evidence (in policy, no less than in the courtroom) is no reason not to strive for its greater usage.”

      One aspect that is interesting is publication bias, and while I have not followed the debate sufficiently, I wonder whether RCTs would be less likely to be used selectively if they integrated an “all trials” approach, in which all trials that seek publication must be registered before they are undertaken, to reduce the likelihood of selective elevation or use. The debate on this in medicine seems to be quite a bit ahead of the discussion in social science and development.

      November 8, 2012
      • But we should avoid trying to make the debate in social science (politics, after all) mirror or follow the path of the debate in medicine (which can be more ‘exact’). This is what Roger Martin was talking about in his presentation at CIGI last year: http://wp.me/pYCOD-iP

        November 8, 2012
      • Hans Gutbrod

        Sorry, I was not clear… I think what’s interesting from medicine (or natural science) is how they ALSO deal with the limits of knowing, and the interface between knowledge and practice.

        Atul Gawande, for example, cites the remarkable claim that one study found that major discoveries in medicine can take on average more than 15 years to reach even half of Americans. That does inspire some questions about how practice can be accelerated, and what can be learned about that more broadly in research to policy (test case example: pre-school).

        The other striking bit I stumbled on recently was Ben Goldacre’s point about how even in medicine publication bias is a huge problem http://bit.ly/Ty87yV — but it is talked about more systematically than in development research, as far as I can make out.

        November 8, 2012
      • This is true. In this sense you are right. But the truly interesting thing is when evidence-based medicine meets patients (and the politicians who represent them). And then, is peace of mind, even if it is not going to lead to a measurable health improvement, good enough?

        November 8, 2012

Trackbacks & Pingbacks

  1. What We Are Reading 26/10/2012 « Do No Harm
  2. I disagree that I disagree! « kirstyevidence
  3. I disagree that I disagree! There is room for more than one method of evidence in policymaking | Impact of Social Sciences
  4. I disagree that I disagree! There is room for more than one method of evidence in policymaking | British Politics and Policy at LSE
  5. Research uptake: what is it and can it be measured? | on think tanks
  6. Reading recommendations: research and policy overview | A New Think Net
  7. Evidence based policy, with a side of methodological caution | on think tanks
  8. Leaps of faith: the successful policy without evidence | Politics & Ideas: A Think Net
  9. Supporting think tanks series: Independent research within government | on think tanks
  10. The on think tanks interview: Fred Carden, Lead Technical Advisor, Indonesia KSI (Part 1) | on think tanks
