
Posts tagged ‘evaluation’

Building sustainable think tanks is a long-run endeavour: the future of ACBF

This post outlines the Africa Capacity Building Foundation's new plans. It appears that a stronger focus on results and deeper engagement with 'senior demand side' players is on the cards. Also interesting, especially for a funder used to helping set up new think tanks, is a focus on mature ones. Peter da Costa offers this as a closing post in a series dealing with a recent think tank summit in Africa.


Find Policy: “Ideas Worth Searching”

Find Policy is a new website offering targeted search: see what the leading think tanks are saying about specific issues. Hans Gutbrod describes why such faster and more focused search can be useful, and why it can help less prominent institutions.


The external evaluation of the Think Tank Initiative: “what we’re learning, and how we’re responding to those lessons”

In this post, Peter Taylor, Program Manager of the Think Tank Initiative, outlines the main findings of the TTI's first phase evaluation. The evaluators also identified a number of lessons and recommendations for the second phase: sharing, learning and collaboration are among them.


Who is responsible for a think tank’s influence?

Attempts to measure influence miss two fundamental questions underlying current efforts to monitor and evaluate think tanks: who is responsible for a think tank's influence? And what are they actually responsible for? Attempting to answer them leads to a further question: when should think tanks claim influence? And to a conclusion: any claims of influence are political acts; they are claims of power over others.


Research uptake: what is it and can it be measured?

Is research uptake measurable? Can it be planned? Or is it just luck? This blog post reviews a number of issues that ought to be considered when trying to measure it. The post argues that instead of measuring it, we should attempt to understand it.


Think Tank Initiative 2012 Exchange: Sustaining quality in social policy research – lessons learned from institutional approaches

This is the third set of videos from the TTI Exchange sessions. This panel reviewed lessons learned from institutional approaches to sustaining quality in social policy research.

First up was Rajeev Bhargava of the Centre for the Study of Developing Societies (CSDS).

Rajeev points out several factors that help nurture quality research. Among them is creating a milieu that acknowledges that academics "get it right": they grasp what is going on and strive for internal goods such as truth and plausibility. However, research also produces external goods, such as power, and think tanks can be lured by these goods, which is why they should not become the aim of research practice. He also emphasizes evidence-based research and the importance of pluralism in any good institution.

Watch Rajeev’s talk:

Next was Mahmood Mamdani of the Makerere Institute of Social Research.

Mahmood points out that while think tanks’ goal is to generate public debate on issues of public policy, researchers cannot assume that there is a causal relationship between policy makers and researchers. Academics must also not forget that the relationship between think tanks and policy makers stems from the understanding that think tanks are autonomous, and that the public agenda is not defined by the existing scope of public policy.

Watch Mahmood’s talk: 

Sukhadeo Thorat of the Indian Council of Social Science Research (ICSSR)

Sukhadeo stresses that sustainable policy demands that all policy suggestions are based on a realistic understanding of the issue at hand. Research is understanding, and policy is action based on understanding. This is why methodology is also an extremely important factor in research. He also states that ideal solutions may not be politically acceptable, so research should always try to offer more than one solution.

Watch Sukhadeo’s talk: 

And finally, Roxana Barrantes of the Instituto de Estudios Peruanos (IEP) 

Roxana first gives an account of the history of research institutions in Latin America, particularly in Peru, to point out that strong academic contributions to knowledge require strong academic leadership. She mentions a couple of well-known Latin American intellectuals and their impact on research and policy, and also emphasizes the importance of attracting young talent from universities.

Watch Roxana’s talk: 

Even the mighty get it wrong: putting think tanks in their place

This article on how Western think tanks got it wrong on the Arab Spring got me thinking about recent discussions of think tanks' impact. A great deal of emphasis is placed on whether think tanks should measure their influence, and even on the individual tools that think tanks sometimes use to communicate their work. In the last few weeks I have been asked to review a couple of papers on monitoring and evaluating think tanks' policy influence, and a couple more M&E framework proposals. This focus on policy influence often:

  • Forgets all the other positive (or at least neutral) contributions think tanks can make to society (educate, provide oversight, improve political debate, break the consensus, strengthen parties, help fund research, etc.);
  • Overestimates the influence think tanks have; and
  • Tends to assume that think tanks are always right about what they say.

The fact is that think tanks play very small roles even in the most think-tank-savvy societies, and quite often they do not know what they are talking about. Here is a quote referring to Anthony Seldon, from a study on think tanks in the UK written by Emma Broadbent:

Rohrer (2008) quotes Prof Anthony Seldon, editor of Ideas and Think Tanks in Contemporary Britain and biographer of Tony Blair, who believes their influence is overstated. Of the three major prime ministerial periods of post-war Britain, the Attlee, Thatcher and Blair eras, he believes only the Attlee era was significantly influenced by think tanks. For Blair, he says, "What is striking, as Blair's biographer, is how little impact they [think tanks] made. You see hardly any influence on policy at all. It is very hard to see how ideas get into the system." Seldon argues that "As the numbers of think tanks have accelerated their influence has declined. Influence comes from people who break off them and come into government."

Last year, Prospect Magazine's annual Think Tank Awards had no real winner in the foreign policy category. According to the judges, the winner would have had to predict the European financial crisis and the Arab Spring. None did. So not only are think tanks less influential than we sometimes like to think, but they can also get it wrong, even where resources and opportunities are as readily accessible as they are in the UK.

This is important for two reasons:

  1. According to the Prospect judges and to The National's article, think tanks play an important role not only in influencing policy directly (by telling governments what to do) but also in informing decision makers of things they may not be aware of. Think tanks, according to both political publications, fulfil a key function often overlooked by those too focused on tangible indicators of impact: enlightenment, information, inspiration… When attempting to assess think tanks' contributions, therefore, we must pay attention to this more indirect yet crucial aspect of their work.
  2. Often even the best think tanks, with all their resources and top academics, and even with local offices and programmes, get it wrong or miss key processes and developments entirely. This means that we should not simply assume that everything a think tank says should influence policy; that would be quite dangerous. What we should be looking for is evidence that their recommendations have informed the public debate and the decisions made by those with the legitimate power to make them.

We should genuinely worry when donors put pressure on their grantees/sub-contractors to influence policy (and to show evidence of that influence). What they should be looking for is more informed policymaking, not just cases of policy influence.

Tracking research impact through Twitter

Cameron Neylon has recently written a piece for the LSE blog Impact of Social Sciences on the possibility of tracking research impact via Twitter. Monitoring how research influences policy, and how professionals use the studies they've read in their day-to-day practice, has proven difficult for a number of reasons: professionals don't usually write new research papers citing the work they've used as sources; identifying those sources can be tricky because they may be several steps removed from the new study; and sometimes researchers aren't even aware of their work being used because they are so far removed from its practical application.

Neylon mentions the example of a research article on HIV status, domestic violence and rape reaching a practitioner community, which he found via Altmetric, a web app that helps track conversations around scientific articles online. The article was tweeted by several accounts, particularly by two South African support and advocacy groups. This example shows that it is possible to identify where research is being discussed and by whom.

It is possible, however, to go further than this:

More recently I've shown some other examples of heavily tweeted papers that relate to work funded by cancer charities. In one of those talks I made the throw away comment "You've always struggled to see whether practitioners actually use your research…and there are a lot of nurses on Twitter". I hadn't really followed that up until yesterday when I asked on Twitter about research into the use of social media by nurses and was rapidly put in touch with a range of experts on the subject (remind me, how did we ask speculative research questions before Twitter?). So the question I'm interested in probing is whether the application of research by nurses is something that can be tracked using links shared on Twitter as a proxy?

The hypothesis is that the links shared by nurses and their online community via Twitter are a viable proxy of a portion of the impact of certain research on clinical practice. This, of course, could be used for other professions as well, by monitoring what research is tweeted, how much it is retweeted and how often.
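As a rough illustration of the kind of counting this implies, here is a minimal Python sketch that aggregates tweet and retweet counts per research link from a pre-collected set of tweets. The records and field names are hypothetical; a real pipeline would pull this data from the Twitter API or a service like Altmetric.

```python
from collections import defaultdict

# Hypothetical pre-collected tweet records; a real pipeline would fetch
# these from the Twitter API or an aggregator such as Altmetric.
tweets = [
    {"user": "nurse_anna", "link": "https://doi.org/10.1371/example.1", "retweets": 12},
    {"user": "ward_rn", "link": "https://doi.org/10.1371/example.1", "retweets": 3},
    {"user": "oncology_rn", "link": "https://doi.org/10.1016/example.2", "retweets": 0},
]

def summarise_link_activity(tweets):
    """Count how often each research link is tweeted and retweeted."""
    mentions = defaultdict(int)
    retweets = defaultdict(int)
    for tweet in tweets:
        mentions[tweet["link"]] += 1
        retweets[tweet["link"]] += tweet["retweets"]
    return {link: {"tweets": mentions[link], "retweets": retweets[link]}
            for link in mentions}

for link, counts in summarise_link_activity(tweets).items():
    print(link, counts)
```

Filtering the input to accounts known to belong to a given profession (nurses, in Neylon's example) would turn these counts into the professional-uptake proxy he describes.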

The Impact of Social Sciences blog also has a guide to using Twitter in university research, teaching, and impact activities.

An RCT debate: Abhijit Banerjee versus Angus Deaton

I have taken some time to address the value (or lack of value) of RCTs on this blog. I am not against them in principle, but I consider that they must be used with care and when necessary and that therefore some of the claims being made (that they can tell us what works, for instance) need to be taken with a pinch of salt. Or not at all.

This very interesting debate between Abhijit Banerjee and Angus Deaton for NYU's Development Research Institute is worth reading (and viewing): Deaton v Banerjee on RCTs.

Banerjee’s main argument is that RCTs force researchers to be more rigorous:

Just thinking about designing an RCT forces researchers to grapple with causality, responded Banerjee. And Angry Birds-style trial and error isn't a realistic way to create policy.

However, trial and error is in fact a realistic way to create policy. In the real world, where information is not complete, trial and error is the only way forward. And what is an RCT but a test of a trial, one which may very well end in error? The fact is that most RCTs are developed based on a theory or a hunch; otherwise there would be no need to test it using an RCT. Here, think tanks play a critical role: they can be quick to make suggestions that may go beyond what the evidence allows, point at the errors, and recommend adjustments that help steer the policy in the right direction (if there is such a thing).

Deaton goes further and argues that RCTs could tell us what worked, but not what will work:

Angus Deaton responded that RCTs are of limited value since they focus on very small interventions that by definition only work in certain contexts. It’s like designing a better lawnmower—and who wouldn’t want that? —unless you’re in a country with no grass, or where the government dumps waste on your lawn. RCTs can help to design a perfect program for a specific context, but there’s no guarantee it will work in any other context.

I liked this particular analogy. But an even better one is this:

RCTs may identify a causal connection in one situation, but the cause might be specific to that trial and not a general principle. In a Rube Goldberg machine, flying a kite sharpens a pencil, but kite flying does not normally cause pencil sharpening.

In other words, an RCT may be a useful method to improve a particular intervention, but not to take that intervention beyond its original scope.

It is true that RCTs can lead to more rigorous analysis. However, this should not lead to claims that they are therefore better than any other kind of analysis. When I worked at the Universidad del Pacifico in Peru in the early 2000s, we calculated that certain food and nutrition programmes had leakage levels of over 80%. We did this by simply comparing the people who received food from these programmes against those who were supposed to, according to the programmes' design. This took about 30 minutes to calculate using the national household survey. It did not take long to find out, by means of qualitative research, that this leakage was due to the way the programmes were targeted and implemented (relying on grassroots organisations in many cases). This, coupled with the fact that under-5 malnutrition had barely been reduced by a single percentage point in over 4 years (and US$1 billion spent), led to a very convincing argument in favour of reforming food and nutrition programmes. (We also calculated overlap between programmes, which led to suggestions of which programmes to cut.)

This is not to say that an RCT would not be useful in designing food and nutrition programmes; but to say what was and was not working, we did not need to spend hundreds of thousands of dollars. A few hours in Stata and some good old-fashioned research (interviews, focus groups, site visits, and even testing and tasting the food provided) were sufficient.
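For what it is worth, the leakage calculation itself is simple enough to sketch. Below is a toy version in Python with pandas rather than Stata; the survey extract and column names are invented for illustration:

```python
import pandas as pd

# Invented household survey extract; the original exercise used Peru's
# national household survey, processed in Stata.
survey = pd.DataFrame({
    "household": [1, 2, 3, 4, 5, 6],
    "receives_programme": [True, True, True, True, False, False],
    "eligible_by_design": [False, False, False, True, True, False],
})

# Leakage: the share of actual recipients who fall outside the target group.
recipients = survey[survey["receives_programme"]]
leakage = 1 - recipients["eligible_by_design"].mean()
print(f"Leakage: {leakage:.0%}")  # 75% in this toy data

# Undercoverage: the share of the target group the programme misses.
eligible = survey[survey["eligible_by_design"]]
undercoverage = 1 - eligible["receives_programme"].mean()
print(f"Undercoverage: {undercoverage:.0%}")
```

The same comparison, of who gets the benefit against who was meant to get it, is all the 30-minute calculation required.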

Banerjee is right that RCTs force researchers to ask themselves lots of questions about causality; but so does any good research.

Lately, I have come across funders requesting an RCT approach to assessing the impact or influence of the think tanks they fund (or the impact of their support on those think tanks). Some organisations, keen to win the contracts to undertake these evaluations, appear eager to please them instead of reflecting on what is and is not possible. I think this is misleading and an unnecessary waste of resources. Just as Deaton suggests, in this setting it is impossible to establish randomness or causality: two crucial components of any RCT. There are suggestions that quasi-experimental designs could be developed instead. Quasi-experiments do away with the randomisation aspect of the RCT, but cannot do away with the need for causality; in fact, this makes reaching conclusions about causality even harder.

Furthermore, there aren't sufficient cases to study. A few think tanks per country, all different from each other, targeting different policy communities, by different means, and in different circumstances, cannot be appropriately pooled together or controlled.
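A back-of-the-envelope power calculation makes the point concrete. This is a minimal sketch using the standard two-arm formula, not anything taken from the evaluations themselves: with only a handful of think tanks per arm, only implausibly enormous effects would be detectable.

```python
from scipy.stats import norm

def minimum_detectable_effect(n_per_arm, alpha=0.05, power=0.8):
    """Smallest true effect (in standard-deviation units) that a two-arm
    trial of this size could reliably detect, by the normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return (z_alpha + z_power) * (2 / n_per_arm) ** 0.5

for n in (5, 20, 100):
    print(f"n = {n:3d} per arm -> detectable effect of about {minimum_detectable_effect(n):.2f} sd")
```

With five think tanks per arm the minimum detectable effect is roughly 1.8 standard deviations; even a well-behaved sample of 100 per arm, far beyond what exists in any country, only brings it down to about 0.4.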

But most importantly, a full-on experimental or quasi-experimental evaluation won't offer each think tank anything of use to its own unique circumstances. And it seems ironic that an effort to support think tanks would be side-tracked by an evaluation that offers them little of value.

Prospect magazine Think Tank of the Year Awards 2012

Prospect magazine is inviting entries for its annual think tank awards. This year it has extended its reach to North America and Europe with two new categories.

Entries close on 15th June 2012 and the awards ceremony will take place at the Royal Society on 10th July 2012.

The categories are:

Global

  • Think Tank of the Year
  • Publication of the Year

UK

  • UK Think Tank of the Year
  • Economic & Financial
  • Social Policy
  • Energy & the Environment
  • International Affairs
  • One to Watch

North America

  • North American Think Tank of the Year

Europe excluding the UK

  • European Think Tank of the Year

In the past I have made it clear that I prefer this award to the think tank index produced by James McGann. I am keen to see how the new categories work out. The North American think tank category, I assume, includes Canada, the United States of America, and Mexico; but I have a feeling that Mexican think tanks will be left out. The Europe (minus UK) category is easier to define.

Unlike the McGann ranking, the Prospect awards are based on submissions by the organisations (or interested parties), there is a cost to entry (the equivalent of US$80), and the winners are decided by a panel that discusses each submission in some detail. This is not an objective award, but I do not think it claims to be. And that is what I like about it.

Still, I’d rather it was focused on the UK; even the global, North American, and European categories could be UK focused: judging the influence that those think tanks have had on the UK or on issues that affect the UK.
