Rating and ranking institutions to drive change: a ten-step guide for think tanks and advocacy groups

31 March 2017

Ratings and rankings have become a staple output of advocacy groups and think tanks worldwide. Nevertheless, I recently encountered a think tank working on a ranking that had spun so far out of control that it had fallen half a year behind schedule. Here’s a quick ten-step guide on how to achieve maximum impact while making life easier for your team.

You can also download this guide here: 10 steps for think tanks and advocacy groups.

1. Start with your policy ask

What exactly do you want to achieve? And what exactly do you want the institutions you are rating or ranking to do to advance your aims? You need to get crystal clear about this because you are effectively designing a system that you want institutions to game. The metrics you use will influence institutional and public perceptions of what matters, and what does not – so make sure your policy goals drive your metrics, and not vice versa. Write down your policy ask in a single sentence and take it from there.

2. Decide whether you want to engage up front

At Transparify we share our rating criteria with think tanks and give them time to improve their performance long before we start rating them. As a result, we achieved considerable impact before we even started our first rating. Many institutions appreciate the opportunity to improve their performance and gain a strong rating result. The downside is that this considerably lengthens the duration of a project and requires some extra team input, which you may need to budget for in advance.

3. Who is your audience?

Journalists will scan your report, phone the worst three performers on your ranking for a quote, and then write up the whole story in 800 words max, so multiple metrics with different weightings are wasted on them (and on the general public). On the other hand, if you want and expect the target institutions themselves to really dive into the details of your data, a bit more complexity may add value. Also, journalists love rankings – no matter how flawed – while institutions may prefer ratings because they allow for multiple ‘winners’ and tend to be methodologically more sound.

4. Develop meaningful metrics

Cautionary tale: I once developed a ranking whose purpose was to make international aid agencies more accountable to local citizens. One metric I used was whether agencies translated all their reports into the local language. This metric was not meaningful because virtually everyone in that small country who actually read those tedious donor reports (probably a few dozen people) spoke English anyway. In effect, I was incentivizing agencies to waste money on translations that nobody would ever read, and which would do nothing to actually improve accountability. Don’t repeat my mistake.

5. Design the visuals

Design the visuals you will use for presenting your results before you start gathering data. Make sure those visuals – usually a simple results table is enough – clearly communicate your policy ask, fit onto one single page, and will look good on social media. If you cannot find a way to present your results in a clear, simple, compact and visually attractive format, revisit your metrics.
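One cheap way to test this is to mock up the visual with placeholder data before any real data exists. Here is a minimal sketch in Python; the institutions, scores and policy ask below are invented purely for illustration:

```python
# Mock up the results visual with placeholder data before collecting anything real.
import matplotlib.pyplot as plt

institutions = ["Institute Alpha", "Institute Beta", "Institute Gamma", "Institute Delta"]
scores = [5, 4, 2, 1]  # hypothetical transparency scores, 0-5

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(institutions, scores, color="steelblue")
ax.invert_yaxis()                              # best performer on top
ax.set_xlim(0, 5)
ax.set_xlabel("Score (0 = opaque, 5 = fully transparent)")
ax.set_title("Who discloses their funding?")   # state the policy ask in the title
fig.tight_layout()
fig.savefig("results_mockup.png", dpi=200)     # check it reads well at social-media size
```

If the mock-up does not communicate your policy ask at a glance, that is your cue to rethink the metrics before you invest in data collection.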

6. Write the methodology up front

Completing your research design before you begin data collection is widely regarded as a way to increase the quality and integrity – and thus the credibility – of research. Before you start contacting or rating institutions, write up the complete methodology as it will appear in the annex of your report, leaving the spaces for data blank. For example: “Two raters independently assessed 20 institutions during 11-15 May. In XX cases, they returned identical rating results, but in XX cases, the assessments differed. Team Member A then did this-and-that to determine the final result for each of these institutions during 16-18 May.” This ensures that all team members fully understand – and can discuss and modify – processes, timelines and responsibilities before the real work starts. It also enables the project manager to keep track of progress during implementation. Plus, you’ll eventually need to write a methodology anyway, so you’re not wasting any time.
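To make the double-rating example above concrete, here is a minimal sketch of how you might tally agreements and flag cases for adjudication once both raters have submitted their scores; the institutions and scores are hypothetical placeholders:

```python
# Compare two raters' independent scores and flag disagreements for adjudication.
# Institution names and scores are hypothetical placeholders.
rater_a = {"Institute Alpha": 4, "Institute Beta": 2, "Institute Gamma": 5}
rater_b = {"Institute Alpha": 4, "Institute Beta": 3, "Institute Gamma": 5}

identical = [name for name in rater_a if rater_a[name] == rater_b[name]]
differing = [name for name in rater_a if rater_a[name] != rater_b[name]]

print(f"Identical results: {len(identical)} of {len(rater_a)}")
print(f"To adjudicate: {differing}")
```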

7. Pilot the methodology

Rate a small sample of your target institutions to see whether your methodology works in practice. Keep a log of the time required for each assessment to determine the unit cost per rating, and then multiply the unit cost by the number of target institutions to calculate how much staff time the entire rating process will require. Also, consider contacting the sampled institutions to check if you scored them correctly in order to detect weaknesses in your data quality safeguards at the earliest possible stage. If required, change the metrics, adjust the number of institutions you will rate, and/or rework the entire methodology.
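The staff-time estimate is simple arithmetic. A minimal sketch, with hypothetical pilot timings, institution count and overhead buffer, might look like this:

```python
# Back-of-the-envelope staffing estimate from a pilot (all numbers hypothetical).
pilot_minutes = [45, 60, 50, 75, 40]          # time logged per pilot assessment
institutions_to_rate = 170                    # size of your full target list

unit_cost = sum(pilot_minutes) / len(pilot_minutes)   # average minutes per rating
total_hours = unit_cost * institutions_to_rate / 60

# Add a buffer for validation rounds, follow-up emails and adjudication.
overhead = 1.3
print(f"Unit cost: {unit_cost:.0f} minutes per institution")
print(f"Estimated staff time: {total_hours * overhead:.0f} hours incl. 30% overhead")
```

If the total looks unaffordable, that is the moment to trim metrics or shrink the target list, not halfway through data collection.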

8. Stick to the methodology

Once you start assessing institutions, chances are you will discover some exciting things that you would like to capture in a new metric, or team members will clamour to include additional institutions because they are “really interesting”. Resist these temptations and stick to the script. The ranking mentioned in the introduction slid so far behind schedule in part because team members gradually added more and more metrics and institutions until their assessment sheet ballooned to more than 1,600 data fields. If you really, really want to gather additional data, gather it after you’ve published your first report, not before.

9. Validate results with institutions

Share your methodology and individual results with each institution and give them a chance to point out possible mistakes or oversights. That’s not only sound research practice; it’s also an ethical imperative: if you’re going to publicly name and shame institutions, you must make sure that you don’t accidentally cause reputational damage to those that have been doing the right thing all along.

10. Keep the report short

Now that you have your final rating or ranking results, you need to write the report. Keep it short and simple:

  • explain your policy ask and why it is important
  • summarize your headline results
  • include a few lines that journalists and highly ranked institutions can quote from (“…is completely opaque”, “…is a transparency leader”)
  • present the rating/ranking results on a single page
  • copy and paste your methodology into an annex

Remember, most people don’t care about your report; they only want to know the headline results. Transparify’s first report was covered on the front page of The New York Times, but even so, only 1,000 people visited our website that day, and even fewer read our report. Time saved on report writing is time your team can constructively use to directly engage with institutions, reach out to journalists, write blogs and op-eds, or create funky visuals and spread them via social media.

For further information on Transparify’s experiences with transparency ratings, please read this blog.

Any further questions, comments or suggestions on how to improve this guide? Please email me at [email protected].