This year, instead of ranking think tanks, let's think about them more carefully

26 November 2013

David Roodman and Julia Clark from the Center for Global Development have posted a very interesting reflection on the now unfortunately famous McGann think tank ranking. In Envelope, Please: Seeking a New Way to Rank Think Tanks, they offer an alternative to a global ranking exercise.

By now most people working in or for think tanks must have received a few (more than a few, actually) email reminders to submit nominations for the ranking. I have even heard of emails ‘threatening’ a fall in the rankings for those think tanks not willing to participate. But this is still no more than rumour.

Criticism of the ranking on this blog has been rather consistent (see: On rankings; Another year, another ranking of think tanks (and surprise surprise, Brookings is still the best); Goran’s recommendations on think tank rankings; The mighty influence of think tanks (in the US); And the winner is: Brookings … but, once again, the loser: critical analysis; and The Go to Think Tank Index: two critiques). Even before, when I was working at ODI, I felt that the effort put into this ranking exercise could make a more important contribution elsewhere (I should stress that McGann’s expertise on think tanks is not in question here; in fact, I wish he used his time and the time of his many assistants more productively). The process, and the ranking itself, is in my view (and that of others) inherently flawed: it confuses visibility with influence on the substance of policy and politics.

Roodman and Clark offer an alternative: not a ranking but an exercise in attempting to measure those aspects of think tanks and their actions that can be measured. Some may consider this too cautious; but their caution is based on experience:

Our experience with building policy indexes such as the Commitment to Development Index makes us keenly aware of the limitations of any such exercise. Think tank profile is not think tank impact. Fundamentally, success is hard to quantify because think tanks aim to shift the thinking of communities. But the operative question is not whether we can achieve perfection. It is whether the status quo can be improved upon. Seemingly, it can be: it only took us a few weeks to choose metrics and gather data, and thus produce additional, meaningful information.

And this meaningful information has provided ample opportunities for a meaningful discussion. So what the McGann ranking has failed to do year after year, Roodman and Clark’s exercise has managed in a single post. The authors identify four key methodological issues that could open up several lines of very interesting reflection (I quote in full to encourage others to engage with their own reflections and maybe suggest alternative solutions to the challenges they faced):

  • Who to include: For this exercise, we’ve limited the list to American think tanks on GGTTT’s “special achievement” lists, but more could be added. Furthermore, the definition of a think tank isn’t cut and dried (see Enrique Mendizabal’s useful post). Should we only include organizations whose primary purpose is research (i.e., unlike the Friedrich Ebert and Konrad Adenauer foundations, which are primarily grant-making institutions)? What about independence from government, political parties and educational institutions? One option is to follow Posen’s 2002 analysis, which included only independent institutions (excluding RAND) with permanent staff (excluding NBER).
  • Unit of analysis: For now, we’ve been looking at data for the think tanks themselves. A more complete picture might also include stats on expert staff. But this is no easy task, and it begs further questions (as Posen also noted). Should think tank performance be based on the institutions themselves, or on the sum of their parts? What about visiting or associated fellows? What about co-authorship, multiple affiliations and timelines (people move)?
  • Time period: The current data varies in time period: social media is a current snapshot, media and scholarly citations are aggregates from 2011–12, and web stats are the average of a three-month period. Ideally, the time period would be standardized, and we would be able to look at multiple years (e.g., a five-year rolling average).
  • Quality: The analysis currently includes no indicators of quality, which is often subjective and hard to quantify. When research is normative, ideology also gets in the way. Who produces better quality material, the Center for American Progress or the Cato Institute? (Survey says: depends on your political orientation.) It’s tempting to try and proxy quality by assigning different values to different types of outputs, e.g. weighting peer-reviewed articles more than blog posts because they are “higher quality.” But assessing publication importance (like JIF) doesn’t work in academia and it would be even more inappropriate for policy-oriented researchers and analysts. Think tank outputs are most used by policymakers who need accessible, concise information quickly. They don’t want to pay for or wade through scholarly journals. Not only that, but recent studies suggest the importance of blogs for research dissemination and influence. The NEPC offers reviews of think tank accuracy, but not with the coverage or format that this project would need.

Should we focus only on what can be measured? I do not think so. I think that subjectivity is important when assessing the contribution of think tanks to any society or community, because the value of a think tank to that community is subjective. After all, when assessing value we have no other way but to ask, directly or indirectly, whether those who use, or could use, their research and advice value them or not. But subjectivity can be managed better when the policy space or the community is more clearly defined. Comparing think tanks across an entire continent offers no valuable insights unless a common playing field of characteristics is used. Argentinean and Brazilian think tanks are more likely to feature on a Latin American ranking than Bolivian ones, but they are not likely to have much influence over Bolivian policy. Surely Bolivian think tanks can learn from their peers in Brazil, but it does not help to rank them against each other.

Location, then, is a key unit of analysis, and a methodological issue that is absent from CGD’s list above. When comparing think tanks we should think hard about the space that these organisations share. Comparing think tanks in Indonesia would be better than comparing think tanks in South East Asia; but comparing regionally focused or foreign policy think tanks across the region may be better than just looking at those in a single country. Similarly, comparing sub-national think tanks to national think tanks may not be a straightforward affair. While their strategies may be the same, their policy audiences are likely to be different and the scale of their influence incomparable: sub-national think tanks are more likely to focus on influencing policy at the provincial or state level, while their national peers would be expected to operate in national or federal spaces. As a consequence, the visibility and overall influence of the national think tank may be much greater than that of the sub-national one; but it would not be appropriate to rank one above the other.

Let's hope that the ranking (which is coming) encourages more think tanks to do what CGD has done. Instead of buying into a ranking that they know is flawed (and they should; after all, they are supposed to be all for quality research), they should respond by challenging its flaws and searching for more appropriate alternatives and a better use of the information that is now more readily available than ever.