Four questions to assess your research communications impact

9 November 2016
SERIES: Think tanks and communications


Research can have an impact. But only if we get our research to the right people, at the right time, and in a way they can relate to. Given how vital communications is to development impact, are we doing enough, when we monitor and evaluate our work, to understand what works, when and why?

Probably not. Often, communications M&E starts and stops with reporting on download statistics or retweets. But these numbers alone give us only a fraction of the picture. They don’t tell you anything about how someone uses your work – or what you could do differently next time to improve your impact.

So how can we assess and learn from our research communications work?

This isn’t a new question – but it’s still a relevant one. And it’s something ODI has been grappling with for years. In 2007, Ingie Hovland proposed five levels to assess policy influence – designed to go beyond traditional research evaluation of academic peer review and number of citations – and this still forms the basis for ODI’s approach. In 2012, Nick Scott built ODI’s M&E comms dashboard offering a pragmatic tool to measure some of these levels. More recently, an internal ODI working group – led by Caroline Cassidy – set out to think some more about this.

Here are four questions that I think are particularly useful to ask when assessing your comms work:

1. Did you have a good plan to start with?

Audience is (or should be) at the heart of any communications work. If you create a low-quality or irrelevant output, it’s unlikely you’ll achieve much. Equally, if you produce a dozen high-quality outputs but don’t think about who you want to read them and how to reach those people, you’re wasting time and resources.

Creating a communications plan doesn’t have to be complicated or time-consuming. But you do have to think about your aim, key messages, who you’re trying to reach and how to reach them.

So when monitoring and learning from your communications, you should ask: did we have a plan for this piece of work and did we follow it? Did it go out on time, in the right way, and to the right people? And what can we learn for next time?

Answer these questions in a quick after-action review or meeting, making sure you note down any lessons for next time.

2. Did you reach the right people?

Measuring reach means understanding how many people had access to your outputs. For example, Google Analytics can tell you how many people downloaded your report, and social media analytics can give you an idea of how many people might have seen or shared it.

But breadth of distribution isn’t always the aim of the game. Outputs are designed to reach a target audience, and this can be narrow; 500 people might have downloaded your report, but this doesn’t mean much if they weren’t who you wanted to influence.

You can use digital analytics to dig deeper into whether or not you reached your target audience – for example, by looking at the location of people downloading your work (if you’re writing a report for Nepalese researchers, downloads in Nepal are a good sign).
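As a very rough sketch – assuming your analytics tool can export one row per download to a CSV with a country column (the file name and layout here are made up, not a standard export) – a few lines of Python could tally where your downloads came from:

```python
import csv
from collections import Counter

# Hypothetical CSV export from your analytics tool:
# one row per download, with columns such as date, page, country.
downloads_by_country = Counter()

with open("report_downloads.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        downloads_by_country[row["country"]] += 1

# How many downloads came from the audience you were targeting?
target_country = "Nepal"  # example target audience location
total = sum(downloads_by_country.values())
target = downloads_by_country.get(target_country, 0)
print(f"{target} of {total} downloads came from {target_country}")
```

The point isn’t the tooling – a spreadsheet filter does the same job – but that you compare the breakdown against the audience you set out in your plan.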

3. Was your work high-quality and useful?

Measuring the technical quality of your communications is critical. Ground-breaking research won’t go very far if it isn’t considered trustworthy by your audience, or if the key findings are buried in the middle of a 100-page report.

Questions to ask yourself include: is it factually accurate? (Errors are a fast track to your work being seen as unreliable, so peer review and fact-checking are essential.) Are the spelling and grammar correct? (These kinds of mistakes can be distracting and make you seem less credible.) Are the key messages clear? (A good executive summary is a must for research reports.) And is the language accessible and appropriate for your audience?

The ‘usefulness’ of your work is closely linked to quality, but involves thinking a bit more about your audience and how they interact with it. Was it relevant, timely and appropriate to them?

To gather this sort of information, look at whether people are sharing your work, as well as what they’re saying about it. Tweets, blog comments and event feedback forms are all good places to find this information.

4. How was it used?

In research communications, the dream is that your intended audience uses the information to inform a decision, or even change their behaviour. This is much harder to assess.

An impact log is a useful tool to capture formal and informal information about how your work is being used. Formal information may include citations or feedback surveys. Informal feedback might be an email from a colleague, or even a conversation.
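As a minimal sketch, an impact log can be as simple as a shared spreadsheet or a small script that appends entries to a CSV file. The file name, column names and example entries below are illustrative, not a prescribed format:

```python
import csv
import os
from datetime import date

LOG_FILE = "impact_log.csv"  # hypothetical file name
FIELDS = ["date", "output", "kind", "source", "details"]

def log_impact(output, kind, source, details):
    """Append one formal or informal piece of evidence to the impact log."""
    write_header = not os.path.exists(LOG_FILE)
    with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "output": output,
            "kind": kind,      # "formal" (citation, survey) or "informal" (email, conversation)
            "source": source,
            "details": details,
        })

# Illustrative entries
log_impact("Urban resilience report", "formal", "journal citation",
           "Cited in a peer-reviewed article on city planning")
log_impact("Urban resilience report", "informal", "email from ministry contact",
           "Report circulated internally ahead of a budget meeting")
```

Whatever the format, the value comes from recording evidence as it arrives, so it can be pulled together when you report on a piece of work.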

Finally, how to tie it all together? The information you gather here should start to build a story about what you’ve achieved. Otherwise, it risks being just a random set of indicators. It’s important to link what you learn back to your original plan, in order to know if you’re on the right track or if you need to do things differently. And the more you do it, the more you’ll build an evidence base of what works, when and why.

Of course, understanding the impact of research communications is just part of the story when it comes to influencing policy for development impact. There are plenty of tools out there to help you systematically monitor, evaluate and learn from your policy engagement work. But the information you gather from these questions should feed into wider monitoring and learning work, not be separate.