Making Policy Better Series: Good Policy, Bad Politics

20 March 2012

A week ago I reported on an event co-organised by NESTA and the Institute for Government on evidence in policymaking. The second event in the series took place last Tuesday and the report has been written up: good policies, bad politics.

There are a number of interesting lessons, many reminiscent of work I’ve been involved in and ideas proposed in this blog in the past:

  • it is difficult to distil any specific information about the amounts spent on evaluation

Evaluations are often carried out by the programmes themselves, so it is hard to assess total expenditure. More importantly, these evaluations are not necessarily carried out appropriately.

  • Departmental culture also seems very important: the Department for Transport has long been renowned for the quality of its policy appraisal yet is less well regarded for its evaluation. The Department for Work and Pensions, meanwhile, has a long-standing reputation for being strong on evaluation. Some of the analytically weaker departments needed to be clear and transparent about their logic – and about how they would quantify impacts – so they could carry out proper cost-effectiveness evaluations as the basis for better informed decision making.

Culture matters a great deal. When pressure to demonstrate impact (something all too common in the aid industry) takes over, evidence is often used to support policy rather than the other way around.

  • There were some real conflicts between timescales on the demand and supply sides… But there was usually some evidence available – from what had been tried in the past, to international experience which might be applicable, to early findings from ongoing studies.

This is important. When working with researchers to develop policy-influencing strategies, I find they tend to forget that 1) they probably already have a view that can be communicated and 2) theirs is unlikely to be the first research ever done on the issue. There is always something to communicate, and think tanks need to think about this more carefully. Why not strengthen their communications teams with some analytical capacity that can communicate research done by others, giving researchers more space to get on with their own work?

  • The increasing availability of big data sets opened up new possibilities for more evidence-driven decisions.
  • Most Ministers wanted to make good policy choices and do things that worked, to leave a legacy. But they were also under pressure to make decisions and not to risk things going wrong – and that often determined where they and their advisers focussed.

We should not always assume that policymakers are uninterested in or unconcerned by evidence. It is rare for a policymaker to make a decision purposely uninformed by any evidence. What we need to ask is: whose evidence are they paying attention to?

  • But values and politics – where the public were on an issue – mattered too. “Evidence” would never be the sole determinant of policy choices.

Arguments, not evidence, will win the day!

  • More use could and should be made by departments of academic links. It is important to identify academics who are capable of interacting with policymakers; they are also often a lot cheaper than consultants!

DFID has tried this by hiring knowledge brokers and senior research fellows, but maybe what it ought to be doing is hiring more experts for adviser posts instead of the usual generalists. Other UK departments have done something similar. The idea is not bad: hire people who can facilitate contacts between academics and policymakers. Think tanks can play this role (but not if they operate as consultants). In general, however, policymakers like to rely on their networks, so it would be better practice to employ more experts than managers.

  • Excessive turnover in the civil service and the consequent low levels of expertise among officials could also have an impact on the demand for evidence.
  • Part of the answer might lie in changing civil service accountabilities and incentives to make sure policy was based on robust evidence.