Monitoring, evaluation and learning in a South African think tank

6 October 2016
Series: MEL for think tanks

[This article is based on the experience of designing an MEL system for a three-year project, which also included conducting a baseline, mid-term and end-of-project evaluation. The MEL support project ran between 2013 and 2015. The team included Dena Lomofsky of Southern Hemisphere and Enrique Mendizabal of On Think Tanks.]

I recently had the opportunity to work on a project that involved the development of a monitoring, evaluation and learning (MEL) system for a project called Global Economic Governance Africa (GEG Africa), which at the time was being implemented by a highly regarded think tank, the South African Institute of International Affairs (SAIIA), together with a network of think tanks across Africa. The project is funded by the UK’s Department for International Development (DFID). The aim of developing the MEL system was to enhance the project’s capacity to measure its policy influence through greater standardisation and systematisation of data collection and analysis. The ability to measure and assess policy influence would, in turn, enable the organisation to reflect on, learn from and adapt its strategies and activities: building on strengths and overcoming challenges so as to heighten effectiveness and help achieve the think tank’s anticipated policy influence outcomes.

SAIIA had been implementing monitoring and evaluation over the years, but wanted to use this new project to rethink its approach to MEL and bring in a greater focus on outcomes. The intention was to learn through implementing MEL at project level and, from there, to mainstream it into existing MEL processes.

In this blog, I would like to share some of the many lessons that I learnt about the process of developing an MEL system for a think tank and about MEL for think tanks in general.

Lessons about the process of developing an MEL system

Building on what is already there

The process of developing an MEL system for SAIIA began with an M&E situation analysis. This included a close scrutiny of the organisation’s existing M&E resources and structures, with the aim of ensuring that the MEL system worked with or built upon ‘what was already in place’. By doing so, we avoided duplication of resources and structures, enhanced alignment with existing data collection methods and reporting systems, and encouraged buy-in and support of the MEL system amongst staff.

These considerations are important given that SAIIA, like so many think tanks, often experiences capacity and time constraints. Therefore, a streamlined and efficient MEL system is important. In addition, think tanks often have a number of different reporting formats and requirements that they are obliged to adhere to; for example, those of donors. Alignment of the MEL system with existing donor and internal reporting requirements was therefore a key consideration that emerged during the M&E situation analysis.

Consolidating what you have put in place

A second important lesson emerging from the process is that sufficient time needs to be allocated following the development of the MEL system to ensure that it is well-consolidated and sufficiently embedded within the day-to-day functioning of the organisation. Key considerations here include the following:

  • Have the roles and responsibilities for M&E been clarified and allocated? This is important to ensure ongoing and efficient implementation of the newly developed MEL system.
  • Have all key think tank staff been orientated to the new MEL system? Conducting a staff orientation – and ensuring that all of the key staff are present and involved in these workshops – is essential in ensuring that the MEL system has the staff’s full understanding and support.
  • Have all necessary MEL system support structures been identified and – where possible – set in place? For example, it is important to consider the adequacy of an organisation’s information technology (IT) infrastructure and staff capacity. Any additional support must be identified and the necessary processes set in motion timeously to ensure that such requirements can be met as far as resources will allow.
  • Has sufficient time been allocated for an MEL system run-through and review? Our MEL development process included a mid-term review of the project, where we could ‘test’ the project logframe, indicators and data collection tools. This provided us with invaluable insights into potential system glitches as well as the changes required in the project logframe. For example, it was noted that the logframe’s higher level outcomes had to be altered to make them more ‘measurable’ and achievable within project timeframes. Hence, the initial outcome of “Enhanced articulation of South African and African interests in the G20 and other global governance initiatives” was changed to “The growth of a GEG policy community in Africa, with a specific focus on BRICS and G20.” Logframe and system adjustments were then carried out prior to full-scale MEL system roll-out.
  • Has sufficient time been allocated for mentoring of all MEL staff in the organisation once the MEL system is designed? Again, this is an important consideration to ensure that all key staff are familiar with their roles and with the newly developed data capture, analysis and reporting procedures. Budget constraints often mean that post-MEL system roll-out mentoring is not made available. However, as we discovered, this should be included in the MEL process to ensure that there is adequate support on hand – particularly for those staff members who may not have a background in MEL or who have multiple tasks to perform within the think tank.

MEL for a Think Tank – some general observations

The Theory of Change

Establishing direct causal links between inputs, outputs, outcomes and impact in policy dialogue is often a difficult and complex undertaking. Similarly, it is rarely appropriate to attribute policy results to a specific programme or the interventions of a single think tank, given the multiple variables at play in the policy arena.

Developing a Theory of Change for the think tank project was extremely useful in that it allowed project stakeholders to explore outcomes at different levels via the development of pathways of change. This provided a framework for stakeholders to assess the degree to which their intervention contributed to policy-related outcomes. It also prompted project staff to think about the incremental and often subtle changes that need to happen in order to attain and sustain the large-scale, higher-level and more prominent outcomes. First-level outcomes were therefore identified as changes in policy influence capacities, including knowledge, understanding and practice. The next outcome level related to changes in institutional response; for example, increased responsiveness of African institutions to issues of global economic governance (GEG). This, in turn, would contribute to the outcome at the following level: the growth of a GEG policy community in South Africa and the rest of Africa, with a specific focus on BRICS and the G20.

When assessing outputs, the emphasis was placed on assessment criteria such as project quality, relevance and timeliness. This is based on the theory of density, which proposes that a research project will be more successful if it can deliver what the target group/s need, when they need it, in sufficient quantity, and in a suitable or relevant manner.

MEL guideline document

A comprehensive guideline document was prepared for the MEL system. This guideline is generally prepared to ensure that MEL systems and methods for data collection and, specifically, data analysis are clearly articulated for reliability and data standardisation. However, the preparation of a comprehensive guideline document was also considered crucial given the current limited number of M&E specialists in South Africa who also work in the world of think tanks.

MEL tools and reporting formats

A considerable amount of time was spent ensuring that the M&E data collection tools were standardised. This was done to enable quantitative measurement and reporting of feedback from key stakeholders. Furthermore, the M&E data collection tools and reporting formats were designed around particular indicators, so that data could be collated in different ways depending on the requirements of each reporting template. Finally, each tool was clearly linked to the indicator/s that it would gather data on, to ensure that staff had a sound understanding of which tools and data sets related to each indicator. This, in turn, would facilitate data collection and collation, enhance perceptions of data relevance, and encourage accurate and efficient reporting across multiple reporting templates.
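The tool-to-indicator linkage described above can be sketched as a simple data structure. This is purely a hypothetical illustration, not GEG Africa's actual system: the tool and indicator names below are invented for the example, and a real MEL system would typically live in a spreadsheet or database rather than code.

```python
# Hypothetical sketch: registering each data collection tool against the
# indicator(s) it gathers data on, then collating raw records per indicator
# so they can feed different reporting templates.
from collections import defaultdict

# Invented names for illustration only.
TOOL_INDICATORS = {
    "stakeholder_feedback_form": ["policy_influence_capacity"],
    "event_attendance_register": ["policy_community_growth"],
    "media_mentions_log": ["policy_community_growth", "institutional_response"],
}

def collate_by_indicator(records):
    """Group raw data records under each indicator their tool reports on.

    `records` is a list of dicts like {"tool": ..., "value": ...}.
    """
    collated = defaultdict(list)
    for record in records:
        for indicator in TOOL_INDICATORS.get(record["tool"], []):
            collated[indicator].append(record["value"])
    return dict(collated)

records = [
    {"tool": "stakeholder_feedback_form", "value": 4},
    {"tool": "media_mentions_log", "value": "op-ed in national press"},
]
print(collate_by_indicator(records))
```

The point of the design is the explicit mapping: once every tool declares its indicators, the same raw data can be re-collated for any donor or internal template without re-collection.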

What can we conclude?

The design and implementation of an MEL system is a journey that requires an organisation’s commitment of time, energy and resources. The organisation also needs to be willing to reflect and to learn. The rewards of this process, and the motivation that it brings to staff, together with the improvements that can come about from implementing the lessons, are well worth the effort.