This year, instead of ranking think tanks, let's think about them more carefully

Think Tank of the Year Award

David Roodman and Julia Clark from the Center for Global Development have posted a very interesting reflection on the now unfortunately famous McGann think tank ranking. In Envelope, Please: Seeking a New Way to Rank Think Tanks, they offer an alternative to a global ranking exercise.

By now most people working in or for think tanks must have received a few (more than a few, actually) email reminders to submit nominations for the ranking. I have even heard of emails ‘threatening’ a fall in the rankings for those think tanks not willing to participate. But this is still no more than rumour.

Criticism of the ranking on this blog has been rather consistent (see, on rankings: Another year, another ranking of think tanks (and surprise surprise, Brookings is still the best); Goran’s recommendations on think tank rankings; The mighty influence of think tanks (in the US); And the winner is: Brookings … but, once again, the loser: critical analysis; and The Go to Think Tank Index: two critiques). Even before, when I was working at ODI, I felt that the effort put into this ranking exercise could make a more important contribution elsewhere (I should stress that McGann’s expertise on think tanks is not in question here; in fact, I wish he used his time and that of his many assistants more productively). The process, and the ranking itself, is in my view (and that of others) inherently flawed: it confuses visibility with influence on the substance of policy and politics.

Roodman and Clark offer an alternative: not a ranking but an exercise in attempting to measure the aspects of think tanks and their actions that can be measured. This may be considered by some as too cautious; but their caution is based on experience:

Our experience with building policy indexes such as the Commitment to Development Index makes us keenly aware of the limitations of any such exercise. Think tank profile is not think tank impact. Fundamentally, success is hard to quantify because think tanks aim to shift the thinking of communities. But the operative question is not whether we can achieve perfection. It is whether the status quo can be improved upon. Seemingly, it can be: it only took us a few weeks to choose metrics and gather data, and thus produce additional, meaningful information.

And this meaningful information has provided ample opportunities for a meaningful discussion. So what the McGann ranking has failed to do year after year, Roodman and Clark’s exercise has managed in a single post. The authors identify four key methodological issues that could open up several lines of very interesting reflection (I quote in full to encourage others to engage with their own reflections and maybe suggest alternative solutions to the challenges they faced):

  • Who to include: For this exercise, we’ve limited the list to American think tanks on GGTTT’s “special achievement” lists, but more could be added. Furthermore, the definition of a think tank isn’t cut and dried (see Enrique Mendizabal’s useful post). Should we only include organizations whose primary purpose is research (i.e., unlike the Friedrich Ebert and Konrad Adenauer foundations, which are primarily grant-making institutions)? What about independence from government, political parties and educational institutions? One option is to follow Posen’s 2002 analysis, which included only independent institutions (excluding RAND) with permanent staff (excluding NBER).
  • Unit of analysis: For now, we’ve been looking at data for the think tanks themselves. A more complete picture might also include stats on expert staff. But this is no easy task, and it raises further questions (as Posen also noted). Should think tank performance be based on the institutions themselves, or on the sum of their parts? What about visiting or associated fellows? What about co-authorship, multiple affiliations and timelines (people move)?
  • Time period: The current data varies in time period: social media is a current snapshot, media and scholarly citations are aggregates from 2011–12, and web stats are the average of a three-month period. Ideally, the time period would be standardized, and we would be able to look at multiple years (e.g., a five-year rolling average).
  • Quality: The analysis currently includes no indicators of quality, which is often subjective and hard to quantify. When research is normative, ideology also gets in the way. Who produces better quality material, the Center for American Progress or the Cato Institute? (Survey says: depends on your political orientation.) It’s tempting to try to proxy quality by assigning different values to different types of outputs, e.g. weighting peer-reviewed articles more than blog posts because they are “higher quality.” But assessing publication importance (like JIF) doesn’t work in academia and it would be even more inappropriate for policy-oriented researchers and analysts. Think tank outputs are most used by policymakers who need accessible, concise information quickly. They don’t want to pay for or wade through scholarly journals. Not only that, but recent studies suggest the importance of blogs for research dissemination and influence. The NEPC offers reviews of think tank accuracy, but not with the coverage or format that this project would need.

Should we focus only on what can be measured? I do not think so. I think that subjectivity is important when assessing the contribution of think tanks to any society or community because the value of a think tank to that community is subjective. After all, when assessing value we have no other way but to ask, directly or indirectly, whether those who use, or could use, their research and advice, value them or not. But subjectivity can be managed better when the policy space or the community is more clearly defined. Comparing think tanks across an entire continent offers no valuable insights unless a common playing field of characteristics is used. Argentinean and Brazilian think tanks are more likely to feature on a Latin American ranking than Bolivian ones, but they are not likely to have much influence over Bolivian policy. Surely Bolivian think tanks can learn from their peers in Brazil, but it does not help to rank them against each other.

Location, then, is a key unit of analysis; and a methodological issue that is absent from CGD’s list above. When comparing think tanks we should think hard about the space that these organisations share. Comparing think tanks in Indonesia would be better than comparing think tanks in South East Asia. But comparing regionally focused or foreign policy think tanks in the region may be better than just looking at these in a single country. Similarly, comparing sub-national think tanks to national think tanks may not be a straightforward affair. While their strategies may be the same, their policy audiences are likely to be different and the scale of their influence incomparable: sub-national think tanks are more likely to focus on influencing policy at the provincial or state level, while their national peers would be expected to operate in national or federal spaces. As a consequence, the visibility and overall influence of the national think tank may be much greater than that of the sub-national one: but it would not be appropriate to rank one before the other.

Let’s hope that the ranking (which is coming) encourages more think tanks to do what CGD has done. Instead of buying into a ranking that they know is flawed (and they should know; after all, they are supposed to be all for quality research), they should respond by challenging its flaws and searching for more appropriate alternatives and a better use of the information that is now more readily available than ever.

3 Comments
  1. Enrique,
    Thank you for your kind comments. I have one question for you, prompted by the last paragraph. Do you think we should push this analysis through to a ranking, or just stop where we are, with a set of indicators? It would be easy to make a ranking. But I see two reasons not to:

    1) As you have rightly pointed out, we’re leaving out a lot.
    2) Julia and I work for a think tank and we lack the objectivity and credibility to pull this off, in particular to assess ourselves.

    We’d be keen to hear your thoughts on this.
    –David.

    November 26, 2012
  2. Thanks for the questions, David.

    I do not think you should push the analysis through to a ranking, but you could do more with the group of think tanks you have selected. Or maybe a new list: ‘US think tanks working on US international development policy’. If you push through with the identification of ‘measures’ and then start asking questions about them, I think you’ll find rather interesting things. A good example of this is Murray Weidenbaum’s book ‘The competition of ideas: the world of the Washington think tanks’, in which he focuses on DC-based think tanks.

    You could also, following your comments about ECDPM, look at international development think tanks (which is not the same as think tanks that impact on international development policy) in donor countries. Compare for example CGD, ODI, DIE, ECDPM. Staff structure, staff turnover (and career paths), size, budgets, funding sources, funding arrangements (grants, contracts, etc.), multi or single issue, research/comms balance, channels and tools used, audience/public segmentation, etc.

    I think you’ll find many interesting things among the similarities and the differences. But you’ll also find very interesting things about the nature of the aid industries in each country.

    Working for a think tank makes it hard for you to assess CGD vis-à-vis others, but it should not prevent you from studying think tanks and the contribution they make to US international development policy (which is your business) and to international development policy and practice more broadly. In a way, I would argue that think tanks should be willing to think critically about themselves. In fact, your credibility would only increase if you were willing to challenge yourselves in the process. I’ve often argued that the best think tanks are led by people who are not just managers but thinking managers. It would not be the first time I praise Lawrence for playing this role. And your post shows that he is not an outlier at CGD.

    In terms of ranking, this will depend on the space you choose. It will be hard to rank yourselves, but maybe easier to rank others: think tanks working on issues that you are not involved in, maybe.

    But if you want some sort of a ranking, then I would attempt to introduce as much subjectivity as possible, as Prospect has done. Maybe, after gathering all that data (what you already did, plus more about the think tanks), set up a panel (public and separate from you) to judge the best think tanks of the year or a period. What, in the opinion of the panel, has been the best DC-based (or US) think tank in development policy this year? And, most importantly, why? For instance, the panel members could say that, in their view, the most important policy process this year has been the US election and the role that aid played in it. So the best US international development policy think tank would have had to make a difference in it. The panel would expect that the best think tank would have prepared research well in advance, developed the right networks, etc.

    The metrics would then help you to critically reflect on why that particular think tank did so well this time around.

    But the panel could also be encouraged to look beyond the ‘now’ and maybe look for think tanks that, while they may not have been influential this time around, have been working on key long-term issues. They could then identify a ‘one to watch’ think tank that is busy doing all the right things but is still below the radar.

    Again, you could use your metrics to ask yourself questions like: why is it that this think tank can work away on a long-term project that may or may not lead to influence? What is it about its business model, its staff structure, etc. that allows it to do this?

    This is a long response to say that, rather than a ranking, we should look for ‘interesting’ cases (big or small) that tell us something about all the ways in which think tanks can make a difference (creating spaces, promoting policies, training new generations of policy analysts and policy makers, auditing public policy, educating the public and the elites, etc.) and then ask ourselves WHY they have been able to do this.

    Rankings, like ‘labels, frameworks and tools’, can stop us from thinking: http://wp.me/pYCOD-ya

    Hope this helps.

    November 26, 2012

Trackbacks & Pingbacks

  1. A monitoring and evaluation activity for all think tanks: ask what explains your reach | on think tanks
