Monitoring and evaluating research communications: overcoming its intimidating complexity

22 June 2019

Monitoring and evaluation (M&E) of communications is a key concern for think tanks, and it can become a daunting task. At the Group for the Analysis of Development (GRADE), a research centre based in Lima, we faced a number of challenges while redesigning our M&E approach for communications. We needed to redefine spaces to share and discuss with researchers what works in M&E and what does not, to update our knowledge and tools to manage large and diverse amounts of data, and to ensure that communications for research remains relevant to productivity and accountability.

Enhancing our M&E for communications plan can become a long path of disconnected efforts without a clear institutional framework. This, of course, can be discouraging. The good news is that we have probably been on the right track without realizing it. We constantly collect monitoring information for different ends: gathering visibility and impact figures for periodic donor reports, transparency reports or the institutional brochures we prepare at the end of the year, or helping researchers draft the dissemination strategies of their projects.

While M&E for communications is meant to be straightforward, we still face problems integrating it into our work. The old belief that it is a process separate from the outcomes and impact of the research can hinder its contribution to making a difference. This misconception must be overcome in our organizations. There must be a shift from an occasional use of our monitoring skills, when the time to prepare reports arises, to a steady overview of the strong link between monitoring results and research impact. Considering the full extent of the impact and influence of research can make a real difference.

As a 2019 OTT-TTI Fellow, I have had the opportunity to participate in stimulating discussions about my concerns, successes and challenges regarding the measurement of the impact of communications, drawing on the experience of seasoned consultants. The materials highlighted here have made me more optimistic in the face of the intimidating complexity of doing M&E for communications. Over the years, I have become convinced that scrutinizing the effects of our dissemination outputs can provide a significant perspective on the management and quality of research, and can inform future actions and outreach strategies. In the following lines, I summarise the main ideas I have acquired and incorporated into my work.

Why is M&E difficult?

During the 2019 Winterschool for Thinktankers in Geneva, OTT consultant Stephen Yeo opened his presentation with this thorny but timely question. Three responses emerged during the discussion. First, there is a fear of failing, or of finding fault when there is no consensus among those whose efforts are being evaluated. M&E is often understood as a system to measure only our successes or failures in terms of policy influence, rather than a joint exercise to rethink processes. Therefore, when the trend in the figures or the impact narratives reveals failure, the next steps can become stressful to handle. This worsens if project leaders cannot agree on what to do with the results.

Second, M&E work is often planned as a last-minute idea. Since pulling together all the monitoring information takes plenty of time, we often undertake this process only when a donor requests a report or a board meeting approaches. And third, the person responsible for M&E often lacks empowerment: it is not surprising to find the secretary or the administrative assistant in charge of these duties. This disadvantage in terms of capacities undermines the fundamental role of M&E in understanding our impact.

When impact is hard to define

The area of influence or impact is difficult to define. Multiple actors press for change and, when change happens, it does not mean that our contribution was the main reason. And even if our contribution was decisive, policymakers may not be eager to admit it.

Thus, our contribution is related to the way in which policymakers think about issues and challenges. Think tanks can improve the intellectual framework of decision-making by inserting new ideas into the policy debate and can strengthen the institutional strategies that support the skills of policy actors to assess and communicate ground-breaking ideas. Stephen Yeo shared two tips to avoid despair when trying to rethink M&E:

  1. All institutions have implemented some type of M&E, even if it is not done systematically. Take advantage of what each project leader has already been collecting; this will help to create the most obvious structure for our reports. Keep gathering what we know is relevant to donors. Adapt your M&E system to incorporate useful data from allies, such as the media. Also, do not lose the old data: keep an eye on the overall advancement of the work plan and its outputs to identify trends and opportunities to revisit old successes.
  2. Tell compelling stories about your influence. Base them on your Theories of Change. Draw inspiration from general approaches such as Outcome Mapping and Outcome Harvesting, ODI's Research and Policy in Development (RAPID) Outcome Assessment episode studies, or a less conventional approach like the links of policy influence developed by Yeo and Vanessa Weyrauch, which shows how think tanks make efforts to measure impact.

Clarify your objectives and innovate in your metrics

I would like to recognize the guiding role of the RAPID MEL toolkit for communications thinktankers. I highlight two ideas from the report. First, we cannot monitor and evaluate communications if we do not know what we were trying to do in the first place. To ensure that our work is strategic and of high quality, we must rely on a plan that clearly identifies activities and outputs.

Second, we need to go beyond traditional metrics to address three key dimensions: reach, usefulness, and uptake. Reach is the most basic level in M&E of communications and includes outputs related to the distribution of our work. Usefulness takes our M&E strategy to a higher level, covering the quality of practical and applicable information, and its reception, innovation and relevance to users. Uptake is the critical goal: it aims to ensure that our audiences use our research and that it has an impact on the decision-making process. The ODI toolkit offers a wide set of questions and indicators to measure each dimension.
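
To make these dimensions concrete, here is a minimal sketch in Python of how monitoring records could be tagged by dimension and tallied for a periodic report. The records, indicators and figures are hypothetical illustrations, not drawn from the ODI toolkit.

```python
from collections import Counter

# Hypothetical monitoring records: each entry logs an output and the
# dimension of evidence it provides (reach, usefulness, or uptake).
records = [
    {"output": "policy brief downloads", "dimension": "reach", "count": 420},
    {"output": "media mentions", "dimension": "reach", "count": 12},
    {"output": "reader survey 'useful' ratings", "dimension": "usefulness", "count": 35},
    {"output": "citations in ministry reports", "dimension": "uptake", "count": 2},
]

# Tally the evidence collected under each dimension.
totals = Counter()
for record in records:
    totals[record["dimension"]] += record["count"]

for dimension in ("reach", "usefulness", "uptake"):
    print(f"{dimension}: {totals[dimension]}")
```

Even a simple tally like this makes visible where the evidence is concentrated: plenty at the reach level, far less at uptake, which is usually where the hardest measurement work remains.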

During my participation in the 2019 OTT-TTI Fellowship and, particularly, in the enriching conversations with my mentor, I have reflected on the best ways to improve our M&E approach and my role in the process. Recently, my team prepared a presentation with quantitative and qualitative outputs to understand the influence our researchers can achieve. Beyond the traditional way of rewarding our research production by assigning scores to publications, our aim is to address other ways in which researchers contribute to influence, e.g. relevant meetings with policymakers or collaborations with specialized media.

The progressive uptake of these ideas at GRADE is the result of both trial and error and the key support of our Executive Board, which has encouraged me to propose new ways to improve our M&E practices. In this favorable scenario, I believe the main challenge has been to reconcile the initial willingness of our researchers with their own timing: even when researchers are enthusiastic about cooperating, it is difficult to add new tasks to their usual way of working. Currently, our information manager and our unit of institutional projects and professional development are responsible for collecting data from researchers, projects and training activities. The discussion of the findings has set a promising path towards strengthening the position of research communications and its role as a barometer of our influencing capacity.