
Posts tagged ‘Institute for Government’

What does impact mean for think tanks?

Think tanks in the UK, like academics and think tanks across the world, need to think about impact. What is it, and how can it be measured? Learn what three leading think tanks do to address these questions.


The Institute for Government: in Lima with Jill Rutter

Jill Rutter from the Institute for Government in the UK provides an insightful profile of this very original think tank. She discusses its origins, leadership, strategy, and challenges.


Making Policy Better Series: Good Policy, Bad Politics

A week ago I reported on an event co-organised by NESTA and the Institute for Government on evidence in policymaking. The second event in the series took place last Tuesday and the report has been written up: Good Policy, Bad Politics.

There are a number of interesting lessons, many reminiscent of work I’ve been involved in and ideas proposed in this blog in the past:

  • It is difficult to distil any specific information about the amounts spent on evaluation.
Often evaluations are carried out by the programmes themselves, so it is hard to assess total expenditure. More importantly, these evaluations are not necessarily carried out appropriately.
  • Departmental culture also seems very important: the Department for Transport has long been renowned for the quality of its policy appraisal yet is less well regarded for its evaluation. The Department for Work and Pensions, meanwhile, has a long-standing reputation for being strong on evaluation. Some of the weaker analytical departments needed to be clear and transparent on their logic, and on how they would quantify impacts, so they could do proper cost-effectiveness evaluation that could be used as the basis for better-informed decision making.
Culture matters a great deal. When pressure to demonstrate impact (something all too common in the aid industry) takes over, evidence is often used to support policy and not the other way around.
  • There were some real conflicts between timescales on the demand and supply sides… But there was usually some evidence available – from what had been tried in the past, to international experience which might be applicable, to early findings from ongoing studies
This is important. When working with researchers to develop policy influencing strategies, I often find they forget that 1) they probably already have a view that can be communicated and 2) theirs is unlikely to be the first research ever done on the issue. There is always something to communicate, and think tanks need to think about this more carefully. Why not strengthen their communications teams to include some analytical capacity that can communicate research done by others, while giving researchers more space to get on with their own work?
  • The increasing availability of big data sets opened up new possibilities for more evidence driven decisions.
  • Most Ministers wanted to make good policy choices and do things that worked, to leave a legacy. But they were also under pressure to make decisions and not to risk things going wrong – and that often determined where they and their advisers focussed.
We should not always assume that policymakers are uninterested in or unconcerned about evidence. There are rarely any cases where a policymaker has made a decision purposely uninformed by any evidence. What we need to ask is: whose evidence are they paying attention to?
  • But values and politics – where the public were on an issue – mattered too. “Evidence” would never be the sole determinant of policy choices.
Arguments, not evidence, will win the day!
  • More use could and should be made by departments of academic links. It is important to identify academics who are capable of interacting with policy makers; they are also often a lot cheaper than consultants!
DFID has tried this by hiring knowledge brokers and senior research fellows, but maybe what it ought to be doing is hiring more experts for adviser posts -instead of the usual generalists. Other departments in the UK have done something similar. The idea is not bad: hire people who can facilitate contacts between academics and policymakers. Think tanks can play this role (but not if they operate as consultants). In general, however, policymakers like to rely on their networks, so it would be better practice to employ more experts than managers.
  • Excessive turnover in the civil service and consequential low levels of expertise among officials could also have an impact on the demand for evidence.
  • Part of the answer might lie in changing civil service accountabilities and incentives to make sure policy was based on robust evidence.

Making Policy Better: The Randomisation Revolution – How far can experiments lead to better policy?


An interesting event hosted by NESTA, the National Institute for Economic and Social Research, and the Institute for Government in the UK: Making Policy Better: The Randomisation Revolution – How far can experiments lead to better policy? It follows from some of the findings from the IfG’s Making Policy Better report, which called for:

  • A public statement by each department (secretary of state and permanent secretary) on how they will meet a set of new “policy fundamentals” – the building blocks of good policy. The minister and the civil service can then be held to account by, for instance, a departmental select committee, on how far they have met that commitment
  • A new responsibility for the permanent secretary to ensure that ‘good policy process’ has been followed – along the lines of their existing responsibility for value for money; Policy Directors in departments would be personally accountable to departmental select committees for the quality of ‘policy assessments’ published alongside new policies
  • A new Head of Policy Effectiveness in the Cabinet Office – a very senior official responsible for ensuring the quality of policy making in government, overseeing evaluations to make sure they are both independent and used and able to commission lessons learned exercises when things go wrong
  • New emphasis on both ministers and civil servants recognising the value each brings to the policy making process.

This may of course not be particularly relevant to many developing countries but the fact is that most countries -regardless of their level of development- have systems that are heavily influenced by these ideal ones. So it is relevant to know what is going on in the developed world.

This event’s core subject matter is well known to many onthinktank readers as it is all about randomised controlled trials. The main speaker is Dr. Rachel Glennerster, Executive Director of the Abdul Latif Jameel Poverty Action Lab (J-PAL) at MIT:

With Esther Duflo and Michael Kremer, she is one of the prime movers behind the “randomisation revolution” that has transformed both the theory and practice of development economics over the last decade.

‘There has been an explosion in the use of randomised evaluations’, says Dr. Glennerster. The main things they have learned are specific lessons about the evaluations themselves, but there may be some more general lessons too. What are they?

  • Rigour matters: it helps to test our gut feelings.
  • We can look at a range of questions: this new wave of RCTs is looking at new kinds of questions, such as adolescent empowerment, corruption, etc. For these things we need to find very objective measures.
  • It is a flexible tool: we can use elements of randomness that are consistent with ethical and logistical constraints.
  • There are technical lessons related to doing randomised evaluations (on a budget).

Her presentation is very interesting particularly relating to how to incorporate randomisation into a programme or project. One of the main disadvantages, she accepts, is that while it is good at providing a very specific answer for a very specific group, it is hard to assess the impact of an intervention on society in general. But there are ways to attempt to address this around the edges.
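The core mechanics of a randomised evaluation can be sketched in a few lines: randomly assign units to treatment or control, then estimate the average effect as the difference in mean outcomes between the two groups. The sketch below is my own toy illustration of that idea -the function name, data, and numbers are hypothetical and not from Dr. Glennerster's talk:

```python
import random
import statistics

def randomised_evaluation(units, outcome_fn, treatment_effect, seed=0):
    """Toy randomised evaluation: randomly split units into treatment and
    control, simulate outcomes, and estimate the average treatment effect
    as the difference in group means."""
    rng = random.Random(seed)
    # Randomly assign half of the units to treatment.
    treated = set(rng.sample(units, k=len(units) // 2))
    outcomes = {}
    for u in units:
        base = outcome_fn(u)  # baseline outcome for this unit
        # Treated units receive the (simulated) treatment effect on top.
        outcomes[u] = base + (treatment_effect if u in treated else 0.0)
    treat_mean = statistics.mean(outcomes[u] for u in treated)
    control_mean = statistics.mean(outcomes[u] for u in units if u not in treated)
    return treat_mean - control_mean

# With many units, randomisation balances baselines across groups,
# so the estimate lands near the true effect (5.0 here).
units = list(range(1000))
estimate = randomised_evaluation(units, outcome_fn=lambda u: u % 10,
                                 treatment_effect=5.0)
```

The point of the sketch is the one Glennerster makes about flexibility: the randomness only has to sit in the assignment step, so it can be folded into a programme's rollout (e.g. randomising which districts go first) without redesigning the intervention itself.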

I think it comes as no surprise that I think this evidence-based policy discourse has become a bit of a hype. I appreciate the power of rigorous research (including RCTs). And I appreciate their value to policymakers. I do not doubt this. But I continue to think that there is a danger that we end up believing that the only appropriate policies are those based on RCTs.

Policies and politics are inseparable. The very policies being tested are driven by politics (personal, community, local, national, or international). The trick is to find ways to bring different policy drivers together (no checklist here) to promote the most appropriate policy outcomes.

This is what think tanks (independent, free-thinking, well staffed, networked, close but not entirely controlled by politics or finances) can do better than any other type of organisation.

I worry that those proposing them think (do they?) that all policy debates can be solved by introducing rigorous evidence. Dr. Glennerster seems to suggest that policy debates in Africa and Latin America related to fertilisers can be solved by an RCT. RCTs can solve a policy problem and can improve the quality of the policy debate -for sure- but we should be careful of claiming that they are the only input necessary.

Developing countries need systems to manage the RCTs, policymaking bodies to demand and use evidence, tertiary education systems to maintain the production of evidence, etc. Unless these system-wide issues are addressed, the J-PAL and 3ie databases of what works will have limited impact on policymaking -interesting as they may be.


Prospect Magazine Think Tank Awards 2011

Tonight was the Prospect Magazine Think Tank of the Year Awards 2011. Bronwen Maddox, Editor of Prospect, presided and Vince Cable, MP, handed out the awards. The full list of winners and runners-up has been published by Prospect, but here are some of my impressions of the event.

First, I must confess that I think these are the kind of awards that every country ought to have. Even if a country has only 2 or 3 think tanks, the awards are not just a great way to showcase their work but also a way to let others know about them and their potential for good.

Vince Cable had some interesting things to say. Think tanks, he said, should be “thinking the unthinkable” and “putting the unimaginable into practice”. These resonate with the call for “logical leaps of the mind” made at the CIGI Anniversary. Think tanks are great brokers and boundary workers, he added, and also provide an army of people who go on to become invaluable advisors in government and provide ideas for journalists to write about.

Jamie Walls, Vice President of Communications of Shell, who sponsor the event, contributed by saying that the work that think tanks do can shape business. Business is often forgotten as an audience by many think tanks.

The winners.

It is important to point out that the judges were driven by two main criteria:

  1. Originality of research
  2. Some evidence of impact in public policy
The judges also acknowledged that think tanks come in all shapes and forms (and sizes) and that therefore there are certain forces that play against the smaller and more focused ones. If they went by visibility alone they would miss some that often work in silence but are no less important.
Something else worth mentioning is that throughout the presentations Bronwen described the topical issues that had focused the judges’ attention and guided the debate. So think tanks (and their studies) were not considered in a vacuum -it was not just good research they were looking for; it also had to be relevant.
The first category, international (non-UK) think tank of the year, is a tricky one. I say this because it is difficult to see how the detailed analysis and discussion that is possible for the UK centres can happen if many more applications are received. Nonetheless, the fact that the Peterson Institute for International Economics got it (and not just Brookings) is encouraging. And Peterson got it because of its work on the global crisis. I do hope that more applications from developing countries are submitted in the future -they may not win but they will certainly raise the bar and showcase some that may in fact be punching well above their weight.
Punching above its weight is the winner of the one to watch category. This category was disputed between well-established think tanks going through a relaunch or renaissance and new ones. Mentioned were the Resolution Foundation, IPPR and 20/20 Health. The Institute of Economic Affairs was runner-up, but the Media Standards Trust came out on top. They were instrumental in much of the debate over the hacking scandal in the UK and in calls to reform the Press Complaints Commission. They run a number of important and highly innovative projects: Hacked Off (a campaign for a public inquiry into illegal information-gathering by the press and into related matters including the conduct of the police, politicians and mobile phone companies), Journalisted (a brilliant site that helps you find out more about journalists and their sources), The Orwell Prize (the most prestigious journalism award in the UK), and a site that helps the public tell good journalism from articles that simply reprint what press releases say.
The Media Standards Trust has a staff of 4. Yes, 4.
The next award was for publication of the year. Chatham House and the Institute for Fiscal Studies received mentions but the winner was Reform for “Every teacher matters“.
The foreign affairs think tank of the year award was a contested category. The judges considered that the winning think tank ought to have addressed either the Arab Spring or the EU crisis, or both. It would have had to produce research as well as convene the right people. This proved harder to find in reality, as many British think tanks lacked the people on the ground to address these issues. The joint winners, in the end, were RUSI (for its work on China and the Strategic Defence and Security Review) and Chatham House (for its work on Yemen).
The Think Tank of the Year Award went to the National Institute for Economic and Social Research. Runner-up was The King’s Fund for its work on NHS reform, and mentions were given to Policy Exchange, the Institute for Fiscal Studies, and the Resolution Foundation. In the words of Bronwen Maddox: “for doing Ed Balls’s job a bit better” (Ed Balls, by the way, is the Shadow Chancellor -or the opposition’s economics spokesperson).
Its director, Jonathan Portes, gave a few words and accepted that although they had been named think tank of the year, they had yet to claim victory in terms of influencing policy. Not only that: the ideas they were being credited for were in fact not new. They were 50 years old: Keynes’s and Hicks’s. And more than half a century later they are still being repeated to policymakers. Influence, then, takes time, and think tanks can help keep ideas alive.
The award is a great way to address the question of influence. The Institute for Government and ResPublica (previous winners) were not even mentioned; but nobody doubts their worth and their influence. The awards are a recognition but also an opportunity to learn about peers and one’s own trade. They offer a space in which the positive roles of think tanks can be discussed and rewarded.
Think tank funders in developing countries, take note.

Recommendations for good policymaking from the Institute for Government -how can think tanks help?

I came across the Institute for Government‘s Better Policy Making theme a couple of weeks ago. They have recently launched a report on how to improve policy making in Whitehall (shorthand for the British Government) that is worth reading and paying attention to.

Michael Hallsworth and Jill Rutter’s report outlines a number of important lessons and recommendations that are perfectly relevant for think tanks across the world. More importantly, though, the report (and the theme) illustrates the value of studying the politics of policies -and I stand by my number one recommendation to any think tank: make your own business a subject of study.

And the report’s recommendations may also provide some ideas or entry points for think tanks. I’ll address some at the end.

Back to the report.

The study arrives at two main conclusions:

  1. Any reform needs to recognise the real world of policy making. It makes no sense to set out and promote an idealised policy making process with little relevance to what actually goes on in reality. Work by RAPID and the Africa Power and Politics Programme, for example, has been addressing these questions in the international development context.
  2. Policy making must be adaptive. Changes in government, the financial crisis, and other expected and unexpected shocks demand rapid responses from the policy making process to adapt to new challenges and opportunities. The Institute for Government calls it System Stewardship.

What does improving policy making imply? For the Institute for Government this is:

To develop a process that is both resilient to the realities of the policy making system and appropriate for meeting future challenges.

This resilient process is made up by a set of policy fundamentals that together constitute ‘good policy making’.

These fundamentals are affected by a number of components or factors interacting with each other: Structures, culture, controls, politics and skills. Therefore, to improve the fundamentals it is necessary to address these factors.

So back to the fundamentals. The Institute for Government considers that good policy has these characteristics:

  • Goals. Has the issue been adequately defined and properly framed? How will the policy achieve the high-level policy goals of the department – and the government as a whole (with reference to the departmental ‘vision’, as stated in business plans)?
  • Ideas. Has the policy process been informed by evidence that is high quality and up to date? Has account been taken of evaluations of previous policies? Has there been an opportunity or licence for innovative thinking? Have policy makers sought out and analysed ideas and experience from the ‘front line’, overseas and the devolved administrations?
  • Design. Have policy makers rigorously tested or assessed whether the policy design is realistic, involving implementers and/or end users? Have the policy makers addressed common implementation problems? Is the design resilient to adaptation by implementers?
  • External engagement. Have those affected by the policy been engaged in the process? Have policy makers identified and responded reasonably to their views?
  • Appraisal. Have the options been robustly assessed? Are they cost-effective over the appropriate time horizon? Are they resilient to changes in the external environment? Have the risks been identified and weighed fairly against potential benefits?
  • Roles and accountabilities. Have policy makers judged the appropriate level of central government involvement? Is it clear who is responsible for what, who will hold them to account, and how?
  • Feedback and evaluation. Is there a realistic plan for obtaining timely feedback on how the policy is being realised in practice? Does the policy allow for effective evaluation, even if central government is not doing it?

But what about the implementation of policy? Is a good policy on paper enough? Or should we wait and see if the policy is successful before we judge it as good? Is the proof in the pudding, so to speak? The report addresses this issue. Hallsworth and Rutter stress that policy realisation cannot be conceived as separate from policy design:

  • Policy formulation and implementation are not separate, but intrinsically linked
  • The potential outcomes of the policy itself may change significantly during implementation
  • Complexity in public service systems often means central government cannot directly control how these changes happen
  • The real world effects policies produce are often complex and unpredictable.
Hence, policy makers cannot just design and sit back. They must follow the life of the policy -and must be ready to adapt it and re-think it as it interacts with reality. This is what stewardship is all about -not directing but accompanying the process, intervening when necessary, and correcting the course as lessons are learned. To explain the roles of the Stewards, the authors present an interesting analogy from football, in relation to four roles: goals, rules, feedback and response.
  • Goals: The football manager sets an overall goal for the team: win the game. The manager does not stand on the touchline trying to direct every player’s movement.
  • Rules: The game has a set of basic rules: do not use hands, do not take the ball outside a set area. Apart from these basic rules, the players have freedom. The manager does not tell them to do exactly the same thing each time they receive the ball.
  • Feedback: The manager watches the game and sees how it is playing out in practice. The manager watches different parts of the game and tries to see how the team is working together overall.
  • Response: In response to the game, the manager may change the team’s tactics or formation; substitute one player for another; issue instructions to particular players; or give a motivational talk at half time. The manager tries different responses and watches for the effects that ensue.
So making it a reality demands that we address these fundamentals; interventions fall under six categories of factors or components:
  • Broadly speaking, policy making bodies must set out clear and public statements of policy making practice -code of conducts- to uphold these fundamentals.
  • In relation to structure: policy bodies (ministries) require a policy director to own and promote the quality of policy making within the organisation. Across the government, a head of policy profession (with the role of ensuring policy effectiveness) must be established (in the UK the Head of the Profession exists, but in other countries this is not the case).
  • In terms of controls: internal and external controls to uphold the quality of policy making must be developed and strengthened: new roles of internal auditors or publicly available sources of evidence and impact evaluations on new policies could be used.
  • In terms of politics and the roles of politicians: Good policies successfully combine the political (mobilising support and managing opposition, presenting a vision, setting strategic objectives) and the technocratic (evidence of what works, robust policy design, realistic implementation plans). Therefore, governments must make clear and public statements of their high-level policy goals to guide departments and ministries in finding the right balance. Ministers and officials also need more and better guidelines on finding the right balance between political and technocratic interests. To support this, ministers (politicians) need to be part of the policy making process from as early as possible to ensure that co-design and implementation are possible. And a key recommendation, worth setting out on its own, is that:

Policy making should be seen as a more open and transparent activity. Analysis and evidence should, where possible, be produced and discussed in advance of option decisions to enable better external engagement with the problem. Ministers should be asked to make decisions from a shared analytic base. Interdepartmental discussions should focus on producing best decisions, not seeking lowest common denominator agreement to reconcile conflicting positions.

  • In terms of skills: Those playing the roles of Head of Policy Effectiveness and Heads of Policy must collaborate to develop and implement ongoing training courses. Most importantly, the civil service must recognise those who are experts in particular fields. Experts must be cherished and used to maintain a body of high quality research evidence in their subject area and networks of key contacts. External expertise, the more effective use of evaluations, and the need to address knowledge management concerns are all important additional recommendations.
  • In terms of culture: The report argues that policy making culture needs to adapt:

policy makers need to reconceive their role increasingly as one of creating the conditions for others to deal with policy problems using innovative and adaptive approaches. Incentives should be used to reward those who energetically search out experience and ideas, network, facilitate and understand the systems in which they operate. Policy making needs to be seen as a practical activity as well as an abstract one, and provide greater scope for policy makers to reflect on how they do things. Finally, in a complex and decentralised environment, expectations and perceptions of policy success need to change

What does this mean for think tanks?

Think tanks need to be equally aware of the realities of policymaking -it is useless if they work with an idealised process in mind. And this reality demands, in my view, that think tanks appeal to more than just research based evidence to develop their arguments: values, interests, experience, narratives, etc. are important sources of power in the policymaking business.

And of course, think tanks’ targets should be those codes of conduct, structures, controls, skills, politics and culture of policymaking -and not just the policies themselves.

on how to organise and present a think tank’s research

I had a very interesting conversation with Andrea Ordonez from Grupo FARO today. We were talking about how to organise the research programmes of the think tank and it occurred to us that there is often a tension between how research is organised internally and how it is presented -mainly through a website. When I was at ODI this was a constant struggle: hence the lists of programmes, themes and regions; never mind the long list of resources.

The reality of course is that work at ODI is not organised by any of these categories.

Internally, research needs to be organised to maximise quality and efficiency -it must help to manage, work across the organisation, win business, attract new staff, etc. Externally, it must be organised to influence (inform or educate) its public. Two different audiences and objectives.

So it would be easier if one separated the two -internal about management, external about influence- and recognised that there need not be an automatic link between them. Not all research and analysis is worth publishing -nor, in my opinion, should we publish different types of outputs next to each other as if they were comparable: books, journal articles, reports, opinion pieces and blogs all have their place.

Furthermore, external organisation of research (the presentation and communication of research) ought to present a coherent policy message. How else could someone make a decision?

So how to present research? I’ve been looking at some front pages (which is one way of presenting research outside of the organisation) and have found some approaches (but note that there are many overlaps):

Does anyone else have examples of these? Or any other? Maybe favourite think tank websites that may be presented as best practices?

Please send your recommendations.

on rankings

This month, Prospect Magazine announced the winners of its Think Tank of the Year Awards. The Institute for Government won the top spot, with Policy Exchange claiming the prize for the best think tank publication of the year (“Making Housing Affordable,” by Alex Morton); the European Council on Foreign Relations taking the best Britain-based think tank dealing with non-British affairs award; and ResPublica as the “One to Watch.”

The panel included a senior adviser to David Cameron, a member of the House of Lords, a think tank veteran and experienced journalists. Their verdict reflects a particular kind of deliberation that clearly attempts to understand the complexity of the task of picking the ‘think tank’ of the year.

The judges described the Institute for Government as “indispensable,” praising its work on financial consolidation which helped improve the policymaking process leading up to the CSR. Andrew Adonis, the new head of the think tank, accepted the award but was at pains to point out that he deserved little of the credit.

They were also impressed by Alex Morton’s “fresh, thorough and ambitious set of proposals for radically overhauling housing and planning policy in this country.” Published in August, the Policy Exchange report has been widely discussed—and, said the judges, rightly so.

For the European Council on Foreign Relations, special credit was given to its power audits resulting in audits of EU-US and EU-UN relations and its work on international crisis management. And finally, the judges commended Phillip Blond’s achievement in creating ResPublica: a think tank with a distinctive agenda and set of values, which has also published a handful of deeply stimulating reports over the past 12 months.

Also this month, many of us have received a few emails from James McGann urging us to respond to a survey to choose the top ‘go to’ think tanks all over the world. The survey is a massive list of think tanks (down from an even longer one) for the US, the UK, Europe, Latin America, Africa and Asia, as well as for various policy or topic areas.

A number of dimensions are explored and respondents are asked to assess the quality of the think tanks’ research, their communication competencies, their degree of influence, etc.

But how can we compare think tanks in different countries? How can we judge a think tank in the US -endowed and free to speak its collective mind- to be better than one in Ecuador -competing for funds and mindful of what it says and when?

And how relevant is this comparison? Donors are not thinking: should I fund a think tank in the US or a think tank in Kenya? And a think tank in Kenya may look at Brookings for inspiration but cannot copy everything it does -nor should it compare itself with it. An index that compares a US-based and a Kenya-based think tank is really comparing the countries -and there are better indices for that.

The regional rankings do not make sense either: naturally, Brazilian and Argentinian think tanks dominate the list in Latin America -even when their focus is entirely domestic.

In the future, research funders should follow Prospect’s example and promote the setting up of these kinds of nationally focused and led awards. Otherwise we risk promoting a popularity contest -and the shallowness that comes with it.

Brains for hire: the think tank

Brains for hire: the thinktank | Politics | The Guardian.

Britain’s thinktank industry is the envy of the world. The US has institutions so large that they are like universities without students, but they have none of our flexibility and fast reactions. In western Europe, there are huge research agencies funded by public money, but again they are monolithic and don’t innovate. We have political entrepreneurship. We have tanks of people, all thinking; they need no mandate. They just think. And then their thoughts become public policy.

An interesting article by Zoe Williams in the Guardian yesterday. A good illustration of the vast differences between think tanks; of what is wrong and right; and (check out the comments) of the reactions that think tanks (the idea of think tanks, in fact) produce in the general public.

I would argue that this type of analysis -even when light- is useful for any political system that aims to be more learned.

