Monitoring and evaluation (M&E) of communications is an important, but sometimes intimidating, task. At my Lima-based think tank, the Group for the Analysis of Development (GRADE), we recently redesigned our communications M&E approach. And faced a lot of challenges!
Luckily, as an OTT-TTI Fellow I had several stimulating conversations with peers about my concerns, successes and sticking points in communications M&E. Thanks to these conversations and resources, I now feel much more optimistic about, and less intimidated by, the task.
Here I share the main ideas and resources I have acquired and incorporated into my work:
Why is M&E so complex?
To talk about communications M&E, we first have to talk about M&E more broadly. The question 'Why is M&E so complex?' was posed to us by OTT advisor Stephen Yeo at the Winter School for Thinktankers, and four clear answers emerged during the discussion:
First, fear of showing failure or finding fault, especially when there is a lack of consensus among the people whose efforts are being evaluated. M&E is often thought about in terms of success and failure, rather than a collaborative, iterative exercise in learning.
Second, it is an afterthought. Gathering data takes time. But it’s common to leave it to the last minute, when the donor has asked for a report or the board meeting approaches.
Third, the person doing the M&E doesn’t have the necessary skills. It’s not uncommon for administrators and assistants to be given these additional duties, rather than making it a core part of someone’s role.
Fourth, influence or impact is difficult to define when it comes to policy research, as is determining the contribution that you, rather than other factors, have made towards a change. Stephen shared two tips to avoid despair when trying to rethink M&E:
- Use existing data. All institutions will have implemented some type of M&E, even if it’s not systematic. Take advantage of what you already have and use it to structure your reports.
- Tell compelling stories. Base your stories of influence on your theories of change. Suggested approaches include Outcome Mapping and Outcome Harvesting, the RAPID Outcome Assessment, or the links of policy influence approach.
Overcoming its complexity for communications
There’s not a lot of guidance out there specifically on communications M&E. One useful resource to help overcome the complexity is the ODI’s Communications monitoring, evaluation and learning toolkit.
I’d like to highlight two ideas from the toolkit:
First, we can’t monitor and evaluate communications if we don’t know what we were trying to do in the first place. To ensure that our work is strategic and high quality, we must plan and manage our communications activities and outputs well.
Second, we need to go beyond the common metrics for measuring ‘success’ that typically are all about reach. Instead we can address three dimensions: reach, usefulness and uptake.
Reach is the most basic level of communications M&E, related to the breadth of distribution of our work (e.g. number of downloads or retweets). Assessing usefulness takes our M&E strategy to a higher level, covering the quality and applicability of the information disseminated, how it is received, and whether it is relevant and useful to users. Uptake is the highest level, assessing if and how your work is being used. The ODI toolkit offers a wide set of questions and indicators to measure each dimension.
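To make the three dimensions concrete, here is a minimal sketch of how indicators might be grouped by level when tracking a single output. The indicator names and numbers are hypothetical examples for illustration, not drawn from the ODI toolkit itself:

```python
from dataclasses import dataclass, field

@dataclass
class OutputMetrics:
    """Indicators for one communications output, grouped by M&E dimension."""
    reach: dict = field(default_factory=dict)       # breadth of distribution
    usefulness: dict = field(default_factory=dict)  # quality and relevance to users
    uptake: dict = field(default_factory=dict)      # evidence the work is being used

def summarise(metrics: OutputMetrics) -> dict:
    """Count how many indicators were recorded at each level."""
    return {
        "reach": len(metrics.reach),
        "usefulness": len(metrics.usefulness),
        "uptake": len(metrics.uptake),
    }

# Hypothetical figures for a single research brief:
report = OutputMetrics(
    reach={"downloads": 1200, "retweets": 85},
    usefulness={"reader_survey_rating": 4.2},
    uptake={"citations_in_policy_documents": 2},
)
```

Structuring the data this way makes the gap visible: it is easy to accumulate reach indicators while usefulness and uptake remain thinly evidenced.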
Challenges to implementation
There is an old belief that M&E processes are separate from outcomes and impact. This misconception must be overcome in our organisations. M&E is an essential learning tool that supports outcomes and impact. There needs to be a shift from occasional monitoring when donor reports arise, to more systematic processes throughout the research and communications cycle.
At GRADE, for a long time I felt that our communications M&E efforts were somewhat detached, lacking a clear institutional framework. But the good news is that we were probably on the right track without realising it. We were constantly collecting monitoring data for our periodic donor reports, transparency reports and institutional end-of-year reports, or in preparing research dissemination strategies.
Recently, my team prepared a presentation on understanding the impact our researchers can achieve. We went beyond the traditional way of scoring publications based on download numbers, to look at other ways in which our researchers are engaging people, such as meetings with policymakers or collaborators.
The work of progressing these ideas at GRADE is being achieved through a combination of trial and error, and the support of our executive board, which has encouraged me to propose ways of improving our M&E practices.
But even if researchers are enthusiastic and keen to cooperate, it is difficult to add new tasks to existing ways of working. Therefore, one of the biggest challenges is to reconcile willingness with timing and capacity.
Nonetheless, these discussions have set us on a promising path to strengthen our research communications M&E efforts, and their role as a barometer of our influence and impact.