Evidence-based policy, with a side of methodological caution

1 May 2013

About a month ago, Michael Bassey wrote on the LSE British Politics and Policy blog about the UK Cabinet Office’s announcement of a new initiative that would build on existing evidence-based policymaking to guide decisions on £200 billion of public spending. The Cabinet Office would be working with a network of centres to give national government and local public services access to comprehensive evidence.

Academics and scholars should be happy that a government is actively trying to include evidence in its policymaking process, especially given the sustained efforts that have been made to bring this about:

The Campaign for Social Science, run by the Academy of Social Sciences, has for the past two years been trying to ‘inform and influence public policy’ with an impressive series of booklets (7 so far) entitled Making the Case for the Social Sciences.

Nevertheless, all of this excitement comes with a side of methodological caution. Bassey refers to a paper he wrote in 2001 for the Oxford Review of Education, titled “A Solution to the Problem of Generalisation in Educational Research: fuzzy prediction”. In it, he concluded that social science research, precisely because it deals with the social world, involves a multitude of variables, which makes firm generalisation impossible. What is possible is to invoke the principle of “fuzziness” and thus develop the idea of fuzzy generalisation: the social scientist can then say that “x in y circumstances may result in z”.

Because of this fuzzy generalisation, researchers working with policy makers have to tell them what may work rather than what will work. Bassey calls this way of informing the “best-estimate-of-trustworthiness”, or BET:

Making such a BET takes the researcher beyond the empirical evidence of a research project and into the realm of tacit and professional knowledge. It requires the courage to ‘put one’s head above the parapet’. But it should be of value to the politician and help ensure that new policies are less likely to trample on the minorities for whom “z” does not apply.

Social policy, after all, is never one-size-fits-all, which makes the BET a wise choice. Others take it further: Andrew Pollard, assistant director at the Institute of Education, University of London, believes that policy should be not evidence-based but evidence-informed. He also offers a final note of caution: while fuzziness might apply to a research conclusion, it can never apply to the research methodology. This should always be made very clear to policy makers.

Bassey’s view on evidence-based policy is an interesting addition to existing critiques of the research-uptake process. For instance, Andries du Toit’s paper on the politics of research looks critically at the assumptions researchers make about the link between research and policy, and suggests that evidence is not always as well received as researchers like to think. More importantly, researchers’ zeal to have evidence used in policy making can be potentially harmful: the idea that policy should be about what “works”, leaving everything else aside, can lead to the elimination of political debate. The concepts of fuzziness and the BET can help guard against that danger.

Additionally, ideas are not always easy to convey: presenting evidence to an audience, whether politicians, policymakers or the broader public, might not always have the impact researchers expect. How one approaches and uses evidence can be influenced by quite unscientific factors, as Emma Broadbent’s paper on the political economy of research uptake in Africa points out: objectives, expectations, understandings, motivations and commitment regarding evidence all come into play.