Think tank rankings and awards: rigged, futile, or useful

28 July 2014

Prospect Magazine announced the winners of the 2014 Prospect Think Tank of the Year Award two weeks ago. I have written about my preference for this kind of award in the past, so it should not come as a surprise that I have been promoting a Peruvian version with a local magazine there: Premio PODER.

The announcement of the winners sparked a bit of a transatlantic discussion about the validity of the award. Think Tank Watch asked: Are Prospect Magazine’s Think Tank of the Year Awards Rigged? (they do not think it was rigged, by the way, and no accusation has been made) and later: Think Tank Awards & Rankings: A Futile Exercise?

Both are valid questions. How come Brookings, top of the UPENN ranking (every year), was not even shortlisted? Heritage? Cato? AEI? None of the big American think tanks, with the exception of Carnegie, made it to the shortlist:

The Brookings Institution, although not on the shortlist, was cited this year for its especially strong work on the Syria crisis and was described by one judge as “huge, but nimble.” The American Enterprise Institute was also noted for its significance, especially in its attempts to fashion a more moderate policy offering for the Republican Party, while the Centre [sic] for American Progress drew plaudits for its work on the left of the US political spectrum.

In discussions over email with Think Tank Watch I argued that, in my opinion, the award is not necessarily biased against large think tanks, as the article asked. Rather, it is not biased against smaller think tanks. And this is a quality that makes it interesting for the majority of think tanks.

The second question is more interesting and important: what is the point of these rankings and awards? Are they worth the effort?

Since I have already expressed my preference in the world of rankings and awards, I may as well try to put forward an argument for national think tank awards modelled after the Prospect Magazine Think Tank of the Year Award. I will acknowledge some of its weaknesses and try to outline its strengths.

Discussions like these are very useful. As we attempt to develop the model for Peru we are keen to learn from our experience and that of others. And questions like these are the right way forward. What I hope will be clear by the end of this post is that the subjective, value-laden, and unscientific model promoted by Prospect Magazine is, by its very nature, perfectly able to address these questions. It can test new ideas, adopt the ones that work, and adapt as it grows and develops.

I should acknowledge, finally, that I consider the rankings and awards to be a means to something else. So, in a way, they are, as Think Tank Watch says, futile only if they do not achieve anything else. And the question I ask is: are they helping think tanks to be better?

Weaknesses

In the spirit of fairness, and given that I have criticised (and probably will continue to criticise) the UPENN ranking, I should begin with some criticism of the Prospect Awards model (the same model used for the Premio PODER).

The best of all or the best of those who applied? The first valid critique is that the award only considers the think tanks that applied for it. So, to say that a winner is the best in the UK or Peru is to assume that all think tanks applied. While it is likely that the vast majority of think tanks in the UK have applied (it is, after all, the 14th edition of the award), not all think tanks in Peru applied last year (the first time the award was held). But this is something that could be said of all awards. After all, are all films produced considered for the Oscars?

I am confident that Prospect’s judges’ choices for the UK political scene come close to a ‘best of all UK think tanks’ decision, but I am not so sure this is the case for the US and the European categories.

For Europe, the award could focus on the EU policy community. Think tanks are political, and to judge them one must take into account their political space. This is what Prospect does rather well in the UK by linking the award to political relevance. If the European category focuses on Europe-wide or EU politics then it is likely to reach the same level of ‘representativeness’ as it has in the UK.

For the US category, however, the challenge will remain. How can the Prospect award judge all the think tanks in the US in a single category? It could follow my European category suggestion and focus the award on the best US think tanks dealing with the UK or Europe, for instance; but this may be no more than reducing the US applications to Foreign Affairs only.

It could add categories to mimic the UK ones, but this could easily be met with a US-based publication coming up with its own award. And just as Prospect can claim ‘ownership’ of the UK category, this new publication would claim the US for itself.

“It is too subjective.” This is a claim I have heard before. Sure, as Jeff Knezovich showed in his excellent data visualisation exercise, the winners of the Prospect Award were not necessarily the ones with the most publications, Facebook or Twitter followers, or the highest Google ranking. CGD’s work on measuring the public profile of think tanks showed the same thing in the case of the US. Numbers matter, sure, but it’s what is behind them that matters more.

In Jeff’s visualisation, ODI and IDS (IDS was not shortlisted) come out on top for:

  • Pages indexed on Google: first by a landslide
  • 2014 publications: ODI is first, IFS second, and IDS third
  • Twitter followers: ODI and IDS are in the top six

But they are nowhere to be found for media mentions, and for an award like Prospect’s this matters more. Media mentions are closely associated with the think tanks’ political relevance. Did their ideas matter and inform the public debate? ODI and IDS, both focusing on international development policy, having a global audience, and being rather large organisations relative to other think tanks in the UK, are likely to do much better on the three measures mentioned above. But they will always struggle, certainly in the UK political space, to get the same attention that the IFS, IPPR and other winners receive from the media and politicians.

And the problem with a subjective award is that good, important work, if unpopular, may go unnoticed and unrewarded.

Strengths, or rather the weaknesses of others

The strengths of this model are best illustrated in comparison with alternatives. Let’s take three other models. (I am not including David Roodman and Julia Clark’s effort or WonkComms’ Top Trumps because they are not intended to reward think tanks, just to make a point; both are very serious efforts, in their own way.)

The first one is, of course, the UPENN ranking of think tanks. This ranking has been growing over the years but is yet to offer any meaningful contribution to our understanding of think tanks and what makes them valuable. The ranking has, for sure, raised the profile of think tanks in many countries where the label was unknown. But the response from think tanks has not been the most helpful for their own interests.

There is ample criticism of the methodology, and not just from think tanks that did not do well enough. David Roodman at CGD has addressed the ranking’s shortcomings in the past, as has Jan Trevisan at the International Centre for Climate Governance, and Christian Seiler and Klaus Wohlrabe have published their own, very well researched, critique of the 2009 think tank index.

In essence, the ranking fails on a number of factors, including:

  • Transparency: The judges are not known; there is only a reference to their number and backgrounds. We do not know what their ideologies are, what professional formation and biases they may have, which political communities they belong to, etc.
  • Accuracy: Among the organisations ranked there are a number that are not think tanks: government agencies, foundations, networks, and consultancies are often included among the top think tanks in the world.
  • Political irrelevance: Regional and global rankings mean little or nothing to national political spaces. So what if Brookings is the top think tank in the world? IFS has had more influence in the UK, CIPPEC in Argentina, GRADE in Peru, FUSADES in El Salvador, PMRC in Zambia, SMERU in Indonesia, etc.
  • Absence of lessons: So what? Think tanks that make it to the top 20 in their region do not know why. Neither do the ones who did not. The ranking says that the judges take into account a long list of criteria, but it is impossible to know whether they have in fact all used them. It is not possible to verify this; nor are these criteria used in the reporting. On the other hand, the ranking is accompanied by an extensive report produced by James McGann, its author. As I have argued before, it is a shame that a greater effort is not given to the analysis of think tanks and the data (which could very well be public, since it is gathered by requesting information from the think tanks themselves).
  • Negative think tank behaviour: It does not take long for someone who follows think tanks to notice the negative effect this ranking has on think tanks and think tank communities: directors who ask their friends to vote for them, think tank leaderships who decide to focus their attention on communications and international networks before research and local networks, unhealthy competition between local think tanks, etc. UPENN has been organising regional events lately, asking the top-rated think tanks in the region to host an event. But all accounts suggest that these are more about the ranking than the think tanks. See reports from Latin America and Africa.
  • What is it for? Some possibilities include: to raise the visibility of think tanks, to develop a database that can be sold (it is not free), to draw attention to the Think Tanks and Civil Societies Program and UPENN, or to encourage competition between think tanks. Learning and community development, unfortunately, are not among them.

The second alternative model is the RePEc ‘awards’. RePEc is an effort by “hundreds of volunteers in 82 countries to enhance the dissemination of research in Economics and related sciences”. It hosts a number of services, including IDEAS, an open bibliographic database on economics, and EDIRC, a database of research institutions. The size of the database is impressive:

Currently 12,966 institutions in 231 countries and territories are listed

IDEAS uses its database to develop a ranking, and this year’s was recently published. The top 10 offers an interesting view:

  1. National Bureau of Economic Research (NBER)
  2. Centre for Economic Policy Research (CEPR)
  3. Institute for the Study of Labor (IZA)
  4. Brookings Institution
  5. ifo Leibniz-Institut für Wirtschaftsforschung an der Universität München e.V.
  6. DIW Berlin
  7. Peterson Institute for International Economics (PIIE)
  8. Resources for the Future (RFF)
  9. Institutet för Näringslivsforskning (IFN)
  10. Research Institute of Economy, Trade and Industry (RIETI)

First of all, Brookings is not first (as it is in the UPENN ranking). Second, there are quite a few European organisations in the list. So what is going on? The ranking’s methodology may offer some clues, but let’s use the same criteria:

  • On Transparency: The RePEc ranking is pretty transparent. Basically, it uses what it can count, automatically. The judge is a computer, and it bases its decision on 31 rankings built from publications by each author (who is then associated with his or her organisation or organisations): essentially the number of publications, citations and downloads (see the sketch after this list). Unlike the UPENN ranking, we can see the entire list of think tanks/centres and authors. The data, in other words, is there for anyone who wants to check.
  • On Accuracy: Of course, since it is a computer doing all the calculations, one cannot expect it to get all the think tanks right. A quick look at the Peruvian centres (I am Peruvian so I always check that first) points to a number of organisations that do not exist, cannot truly be described as think tanks, or are quite irrelevant to the national policy landscape. They are there because one of their staff (are they still a member of staff?) uploaded a document to RePEc.
  • On Political irrelevance: Nothing can be said about this. Do more downloads and citations suggest demand and therefore relevance? Not really. A paper written by a US-based think tank may be of great interest to researchers in Africa, for instance. Or, what is more likely, citations and downloads only suggest that the issue is of interest to other researchers. A sign that the issue may rise to the public sphere in the future, maybe, but we cannot tell.
  • On the Absence of lessons: On this, the RePEc ranking is honest, at least. It provides advice on how to climb up the RePEc rankings. It does not offer advice on how to be a better think tank but on how to get to the top of the rankings.
  • On Negative think tank behaviour: This is such a technical and academic ranking that it is unlikely to do more than promote a more academic approach among those who take it seriously. It may be a good thing for organisations that want to improve their academic credentials. Like the UPENN ranking, it does not promote collaboration.
  • What is it for? It is a technical exercise to encourage participation in the broader RePEc project. It sets researchers against each other in competition and, through that competition, seeks to increase the number of publications shared through the database. This is a perfectly valid effort for an existing community, but it does not offer more in terms of learning and support.
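To show how mechanical this kind of count-based ranking is, here is a minimal sketch of the general principle: rank each institution on each countable criterion and average the ranks. The institutions, the numbers, and the simple mean-of-ranks aggregation are all hypothetical; this is not RePEc’s actual algorithm, which combines many more criteria.

```python
# Minimal sketch of a purely count-based aggregate ranking (illustrative only;
# not RePEc's actual algorithm). Institutions and counts are hypothetical.
from statistics import mean

# {institution: (publications, citations, downloads)}
counts = {
    "Centre A": (120, 3400, 52000),
    "Centre B": (80, 5100, 61000),
    "Centre C": (200, 1200, 23000),
}

def rank_on(criterion):
    """Return {institution: rank} for one criterion; 1 = highest count."""
    ordered = sorted(counts, key=lambda inst: counts[inst][criterion], reverse=True)
    return {inst: position + 1 for position, inst in enumerate(ordered)}

# One ranking per criterion (publications, citations, downloads).
per_criterion = [rank_on(c) for c in range(3)]

# Aggregate by mean rank across criteria; lower is better.
aggregate = {inst: mean(r[inst] for r in per_criterion) for inst in counts}

for inst, score in sorted(aggregate.items(), key=lambda item: item[1]):
    print(f"{inst}: mean rank {score:.2f}")
```

The point is simply that the outcome is fully determined by what can be counted; no judgement about relevance or quality ever enters the process.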

The RePEc and the UPENN rankings are delinked from politics, and that is their most significant flaw.

The third model is the ICCG’s annual Climate Change Think Tank award. The award focuses on climate change policy, so it already defines a very specific (yet global) political space. The criteria used are quite interesting: they combine quantitative measures with what the ICCG calls Fuzzy Measures, basically “Non-Additive Measures and Aggregation Operators”, which, I must confess, I do not fully understand.

The ICCG’s calculations produce two rankings: “the Standardized Ranking measures the most efficient think tanks in per capita/researcher terms, while the Absolute Ranking measures the best think tanks in absolute terms, regardless of their efficiency and hence size”. This is interesting, but ‘efficient’ in this case is not related to impact or relevance. It is mainly an issue of size (smaller organisations with lots of publications and participation in lots of events are better ranked).
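As a rough illustration of that distinction (with entirely hypothetical centres and numbers, and ignoring the ICCG’s fuzzy aggregation, which I have not tried to reproduce), the difference between the two rankings is simply whether output is normalised by the number of researchers:

```python
# Hypothetical data: (publications + event participations, number of researchers).
# The names and figures are made up purely to illustrate the distinction.
centres = {
    "Centre A": (90, 60),  # large centre, high absolute output
    "Centre B": (40, 10),  # small centre, high output per researcher
}

# Absolute ranking: total output, regardless of size.
absolute = {name: output for name, (output, _) in centres.items()}

# Standardized ranking: output per researcher, favouring smaller prolific centres.
standardized = {name: output / researchers
                for name, (output, researchers) in centres.items()}

print("Absolute:", sorted(absolute, key=absolute.get, reverse=True))              # ['Centre A', 'Centre B']
print("Standardized:", sorted(standardized, key=standardized.get, reverse=True))  # ['Centre B', 'Centre A']
```

In other words, the ‘efficiency’ being rewarded is a matter of size and productivity, not of impact or relevance.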

So on the same criteria, to be fair:

  • On Transparency: The ranking is more transparent than the UPENN one and more straightforward than the RePEc one (though it is my fault that I got confused with the calculations). Still, like the UPENN ranking, it does not disclose the names of the experts.
  • On Accuracy: The ranking is based on an effort to map out think tanks involved in climate change policy around the world. It is unlikely to be exhaustive (it isn’t; I checked). And even if it attempted to be, it would find it hard, unless it had a domestic panel (for each country/region) to decide which were in and which were out. On the plus side, the ICCG has put an effort into looking closely at the think tanks: for instance, it only considers the climate change departments of organisations that specialise in a number of other issues. There are some accuracy issues with this, too, however, as the information is mostly reported by the think tanks themselves, and I wonder if the results would be affected as a consequence (e.g. DIE in Germany may have been taken as a whole). But the separation also raises another point: can you separate one part of a think tank from another? Would Brookings be what it is if you got rid of one or two of its departments? Would its Energy and Environment research area be as strong as it is without an Economics or International Affairs section? This is the same as attempting to separate research from communications when trying to explain influence. It is not possible.
  • On Political irrelevance: Because it is global, it fails to address domestic politics. It attempts to compensate by focusing on international events as a proxy for international politics, but I do not feel this is strong enough. It would have been better if it attempted to show examples of how the winning think tanks have shaped the global (or regional, or national) climate change policy scene. The expert panel could have done this.
  • On the Absence of lessons: Again, without a final ‘why did they rank higher’ illustration, it is hard to say. After all, the ranking does not mention quality. It assumes it via peer-reviewed journals, but this biases the results in favour of US- and Europe-based centres. The ranking does offer some interesting statistics, but the UPENN report provides more ‘broad-stroke’ analysis, in any case. How to do better, then? The questionnaire suggests how.
  • On Negative think tank behaviour: If the entries are not systematically checked, then this ranking may promote good old-fashioned lying. But besides this, I doubt it concerns many think tanks. The think tank map that accompanies the ranking is probably its most useful aspect. Somehow, I feel that the real value of the ranking is in developing an up-to-date database of think tanks working on climate change. A database that can become a network. One that does not compete but seeks to collaborate.
  • What is it for? The ranking feeds the think tank map, which encourages and supports collaboration between think tanks focused on a common objective. It draws these organisations towards the ICCG as the convenor and supports its own objectives. A useful tool as a map/list of think tanks around the world.

All three models described above share another flaw (the main one being the absence of politics): they are poor guides to overall performance. Think tank directors who showcase their high UPENN ranking, and board members or funders who demand it, are really doing a disservice to their organisations. They brush aside many more important aspects of a think tank (see, for example, the case of SIPRI: ranked among the top but an undesirable place to work according to its unions). RePEc is less known but could not be used to give an overall sense of a think tank’s performance. It would be a good judge of its research production, but only if its outputs were picked up by the RePEc database. And the ICCG could be too focused on participation in international events; an excellent indicator, but not one that can paint a whole picture of a think tank’s performance.

But they offer something that the Prospect Award does not: the opportunity to cover quite a few (and potentially all) countries and regions in a single ranking or exercise. And funders of developing country think tanks like scalable projects such as these.

Is the Prospect Award the Best? Maybe just better

Nick Scott, from WonkComms, said it best:

it’s not about being the best, it’s about being better.

First, the criteria:

  • On Transparency: We know who the judges are. We know their names and backgrounds, we know where their ideological preferences lie, and we know which political communities they belong to. So even if we disagree with their judgement, and there will be people and organisations that do, we may understand why.
  • On Accuracy: The debate regarding the boundaries of the definition of a think tank is as fierce as the debate around each category. This was certainly the case in Peru. We know that think tanks are not all the same around the world. Comparing Brookings in the US with CASS in China is not as straightforward as the ranking suggests it is. Every society has its own space for think tanks. An organisation that may not count as one in the UK may very well be one in Peru. And within the same country, sectoral differences may account for organisational differences too. Each case has to be considered with attention and care. This is possible in the Prospect Awards model; not in the global ranking.
  • On Political irrelevance: The national (or supra-national) dimension of the award makes it politically relevant. The award does not just go to a think tank that produced excellent research (that is the role of academic departments); it has to be relevant, too.
  • On the Absence of lessons: Because the judges share the reasons behind their decisions, the winners and the losers have plenty of information to consider in order to improve their performance.
  • On Negative think tank behaviour: When thinking about taking the Prospect Awards to Peru, the aspect of the competition that I liked the most was (and is) the awards ceremony. Once a year, British thinktankers, often too busy to talk to each other over a glass of wine or a beer, come together to celebrate their community, their world, and the good work it produces, regardless of who wins. The real value of the Prospect Award (and one that I hope the Premio PODER will be able to emulate) is in contributing to the development of a healthy think tank community. Everyone wins, even those who do not get a mention on the night, because think tanks are celebrated for the good work they can deliver.
  • What is it for? This is obviously a ‘profitable’ project: it attracts sponsorship (not yet in Peru) and provides the magazine with excellent contacts among the very same community it tries to reach. But the award serves other purposes, too, even if these were not intended at first (and these have guided the effort in Peru): it celebrates the community and offers opportunities to learn from each other.

The value of a think tank to its community and broader society, its influence, the outcome of that influence: these are all subjective questions. Quantifiable measures can help; RePEc and the ICCG manage to do this well. Popularity is also important; the UPENN ranking is, in essence, a popularity contest. But in the end, what matters is what those involved in the same political space as the think tanks think about them and about the impact they have had on it.

This year, the award for the top UK think tank went to the Institute for Fiscal Studies, which also won the prize in the economics and finance category. This is what the Prospect panel had to say:

Economic and Financial Think Tank of the Year

Shortlist:

  • Institute of Economic Affairs
  • IPPR
  • Institute for Fiscal Studies
  • Resolution Foundation

WINNER: Institute for Fiscal Studies

The National Institute for Economic and Social Research did strong work this year on the economics of Scottish devolution. The Social Market Foundation was also applauded for its work on challenging the idea of a “Squeezed Middle” of low to middle income earners who are coming under special pressure. However, these were not on the shortlist.

The Institute of Economic Affairs was short-listed for its clarity of message and the robustness of its policy positions, especially in its opposition to High Speed Rail. A close association with the 1922 Policy Committee has acted as a strong conduit for its ideas to the heart of government. A notable aspect of the IEA’s output has involved engaging with ideas and individuals with views that differ sharply from its own free market libertarian standpoints, not least when it invited Stewart Wood, Ed Miliband’s chief adviser, to speak on the problems facing the UK economy.

The Institute for Public Policy Research was placed on the short list for its trenchant thinking on the economy, not least on stimulating growth in the north of England. The ability of Britain to encourage greater growth in its northern cities will be of crucial importance for delivering a balanced national economic recovery and the IPPR, through its IPPR North office, is uniquely well-placed to contribute.

The Resolution Foundation also made the shortlist, for pressing ahead with its analysis of the problems facing Britain’s low and middle-income earners. The organisation asks pressing questions on insufficient housing provision, the potential threat posed by higher interest rates, dwindling living standards and how these problems should be addressed.

But this year’s winner was the Institute for Fiscal Studies, an organisation that has crawled all over the government’s figures and has been hugely prominent in the fiscal and economic debates of the last 12 months. The insights delivered by the IFS’s research have been of significance, not least the repeated point that the government has not yet completed its intended cuts and that more fiscal pain is to come.

Each of the statements above offers ideas for future action. They provide context and address the importance of political relevance, they praise both research quality and creative communication, they offer points of comparison even among organisations studying entirely different issues, and they acknowledge, by sharing their own deliberation, the inevitably complex and subjective nature of any effort to rank think tanks.

The case I make, then, is for more Prospect-like awards at the national level. There may be countries with too few think tanks where this may not be possible. There, instead of a think tank award, one could develop research awards open to all kinds of organisations.

Such awards celebrate the good work of think tanks and researchers, make them more visible across sectors (thereby attracting the interest of new potential users and funders of their work), and contribute to the development of the think tank community as a whole.