Monitoring and evaluation: Lessons from Latin American think tanks

24 June 2013

The Think Tank Initiative organised, back in April, a workshop on Monitoring and Evaluation for Performance and Impact in Lima. It was an interesting meeting, out of which came a few (I hope) useful blog posts.

An internal report was prepared by the facilitator, Beatrice Briggs, and with her permission and that of the TTI, I have picked out a few interesting bits of information that I think are important to share. I’ve made an effort to keep things entirely anonymous; the Chatham House Rule applies.

In preparation for the event, the think tanks were asked to answer a series of questions. The first related to their M&E strengths - or what they were doing particularly well; the second to the challenges they still faced - or what they were not doing particularly well; and the third to the kind of information they were collecting. There were, of course, other questions, but these are the ones I’ll be sharing in this post.

What were the think tanks already doing?

I have arranged some of the statements provided by the think tanks according to four categories: planning, systems and tools, learning and reporting back, and human resources. A surprising finding from the process (and the workshop itself) was that most of the think tanks had a person (or more) dedicated to M&E. This is not something I had often come across before. Should there be a dedicated person, or is this something that everyone in the organisation should do? I feel that this responds more to pressure from funders for accountability purposes than to a genuine interest in learning on the part of the think tanks. But, let’s see.

Many readers will find the following statements rather typical or at least familiar to them and their organisations.

Planning:

  • We have an internal monitoring strategy and quite effectively implement the monitoring instruments designed.
  • Theory of change and defined strategic plan
  • We have a Project Approval Committee that allows us to take the first step in the monitoring of projects.
  • The organization’s work is based on a five-year strategic plan, in whose elaboration the entire team of the organization participated, giving everyone an important stake in its success. Every year, the plan is evaluated against the new national and international context, and each area carries out operational planning. This allows each program to be fully aware of its objectives for the year, and for the longer term, and therefore gives great unity to the monitoring and evaluation processes.

Systems and tools:

  • Incorporation and use of tools that permit us to monitor the activities we carry out.
  • We conduct a semi-annual monitoring report that identifies strengths and weaknesses, and includes general and specific recommendations for each research team.
  • We produce quarterly and annual reports on the institution’s policy influence, based on each research team.
  • We have certain tools for registering and collecting information: activities bulletin, media scan, surveys, monitoring of the performance of the communication channels (web page, social media), etc.
  • The lessons learned in the course of more than 30 years of institutional life, a period in which we have passed through different stages and implemented different strategies, have allowed us to define a clear institutional mission.

Learning and reporting back:

  • We produce quarterly and annual reports on the institution’s policy influence, based on each research team.
  • The organisation’s work is based on a five-year strategic plan, in whose elaboration the entire team of the organisation participated, giving everyone an important stake in its success. Every year, the plan is evaluated against the new national and international context, and each area carries out operational planning. This allows each program to be fully aware of its objectives for the year, and for the longer term, and therefore gives great unity to the monitoring and evaluation processes.
  • The lessons learned in the course of more than 30 years of institutional life, a period in which we have passed through different stages and implemented different strategies, have allowed us to define a clear institutional mission.

Human resources:

  • Our Institutional Communications Unit conducts monitoring of the media.
  • Each department and program monitors the achievement of its activities, projects, budgets, etc.
  • M&E work at the institutional level is being implemented, little by little, by different areas/departments, each with personnel in charge of it. We have consultancies/studies that will permit us to orient the future M&E system in communications.

What were the think tanks’ main challenges, or things that were not working so well?

Overall, my sense was that most of the think tanks had gone a long way in thinking about M&E and establishing the basic building blocks for it. Where they seemed to struggle was in institutionalising these small yet important steps.

I have organised these statements according to four broad categories: general challenges, what is next?, barriers, and resources (as before, some are repeated):

1) General challenges and things that are missing:

This is interesting: some of the challenges refer to the nature of influence itself, but most (and I have emphasised this here) focus on methodological and organisational challenges:

Nature of influence:

  • The relatively [long and uncertain] time lag of influence often lessens/affects the daily lobbying work with certain decision makers.

Methodological challenges:

  • We still have not succeeded in developing indicators of the impact of our research on public policies - indicators that monitor whether policy makers consider and apply the recommendations of our studies, how much impact they have, the extent of the impact, etc.
  • We lack tools to measure and better understand the impact of our research in terms of 1) public policy influence, 2) regional reach, and 3) international reach.

Organisational challenges:

  • We do not align the objectives of particular projects with the strategic plan.
  • We do not have a formal M&E process (and tools) at the overall management level of the organization, and even less one focused on our public policy influence
  • Lack of articulation between the monitoring of media, of policy influence, and of activities and products.
  • The lack of role definition means that tasks can be duplicated. By the same token, the lack of a better institutional M&E plan and a strategic timeline for evaluation means that agreed-upon deadlines are not met.
  • Absence of systematic means of communicating to our Communication Dept. the advances and achievements in the different areas
  • We lack tools for systematic monitoring of public policy influence and public opinion
  • No formal mechanism for sharing the monitoring results of what the departments and programs are currently doing

2) What is next?

  • These mechanisms are not formally established
  • Continuous updating of maps of actors, not just a list of actors
  • Need to monitor the most recent statements, actions of key actors and institutional policies to identify more precisely the contribution (policy influence) of [the think tank] in these areas.
  • Improvements could be made in the way some of our programs are evaluated in order to make the impact of our action clearer both internally and externally.
  • Need to reinforce this with mechanisms that allow us to monitor and evaluate our influence as well as the fulfillment of the strategic objectives that we set.
  • We still have not succeeded in developing indicators of the impact of our research on public policies - indicators that monitor whether policy makers consider and apply the recommendations of our studies, how much impact they have, the extent of the impact, etc.

3) Barriers:

Some of the barriers refer to the role of individuals, while others refer to the absence of key organisational ‘institutions’.

People:

  • Weak culture of obligatory accountability in our environment, which is also evident in our daily activities.
  • There is a lack of interest among certain researchers in cooperating with the work of M&E at the institutional level and, especially, in providing information in a timely manner.

‘Institutions’:

  • We do not have automated processes that allow us to document and follow up on our projects
  • We do not have adequate follow-up of the implementation of suggestions made in the consultation mentioned
  • We do not have a structured system related to the strategic objectives

4) Resources (or lack of):

  • We do not have a person dedicated full-time to M&E activities
  • No one in charge of coordinating M&E
  • We have no experience with the design and implementation of a formal M&E system
  • We lack a professional or officer who leads the implementation of the M&E system.
  • The tasks of collecting impact indicators are divided between the members of the Information and Communication Unit; nevertheless, we believe that we still lack a clear definition of who should be in charge of implementing an M&E strategy, or what the specific role of each member of the unit is.
  • In the communications area we lack the tools and staff to carry out a complete M&E job (reaching all the media, monitoring mass mailings, academic activities, among others). By the same token, we have yet to implement M&E systems in a more professional manner in certain areas: library, bookstore, administration, etc.

My own judgement is that a great deal of emphasis has been placed on human resources. I am as yet unsure about this issue. Should think tanks, especially small ones, have a dedicated team for M&E? Given that most do not have an adequately staffed communications team, is this the best use of their resources? (The jury is out: I have written something related to this here: ‘Tourist’ funders are unhelpful when supporting and evaluating think tanks.)

So what information do they collect?

Of course, all these systems and tools that the think tanks have can mean very little unless we think about the kind of information they are collecting - and for what purpose. The table below was put together by the workshop’s facilitator, and it shows some interesting things:

  • Project planning, work-plans, etc. are covered;
  • Budgeting and other financial information, too; and
  • There is a big inconsistency in relation to ‘impact’ information, however. I think this last point is important. It reflects the difficulty that this issue presents, both methodologically and operationally.

I say there is an inconsistency because my own conversations with the participants suggested as much. In general, think tanks are better equipped to gather data related to activities they control but, not surprisingly, less so when it comes to assessing their influence.

Another point worth highlighting relates to information on budgets and financing. While these Latin American think tanks are strong at recording and reporting on their finances and accounting obligations, they are not, like most think tanks, as strong in their capacity to monitor the use of resources by project or activity. When I asked them (not all the think tanks present, granted), none could say that they had the right systems and tools to determine exactly whether a project was over, under or bang on budget.

Kind of information collected (and the number of institutions that reported collecting it):

  • Work plan and timeline, and advances for activities/projects: √√√√√ (5)
  • Academic activities, internal and external: √√ (2)
  • Publications in various formats (including magazines, DDT, policy briefs, etc.): √√ (2)
  • Researchers’ participation in different meetings, conferences and networks, both national and international: √√ (2)
  • Impact indicators for different activities and publications: √√√√√ (5)
  • Policy impact indicators regarding results of our studies and research: √√√√√√√ (7)
  • Financing, budget: √√√√ (4)
  • Research quantity and quality: √√√ (3)
  • Policy influence: (none)
  • Projects to be implemented in the future and their alignment with the strategic plan: (none)
  • Opportunities for improving project implementation: (none)
  • Hard facts to tell the institutional story and demonstrate our experience in different fields: (none)
  • Qualitative information about our social impact: (none)
  • Info about how our advocacy has impacted target populations: (none)
  • Satisfaction of users, receivers or clients of our services and products: (none)
  • Information to enrich the strategic plan: (none)

Final reflections

The workshop presented the participants with a range of approaches and tools, and the opportunity to share their concerns and ideas. In the end, they had a chance to reflect on what they had learned - or at least what they would take with them. The following are statements as recorded by the organisers and the facilitator:

On why M&E is useful for a think tank:

  • M&E helps to order data.
  • M&E provides an opportunity to integrate tasks and processes.
  • Developing a strong commitment to M&E implies generating awareness about the usefulness of these activities.
  • M&E improves the performance of the researcher and the consultant in all the areas of the organization where M&E is implemented. In short, it improves organizational management.
  • M&E can improve organizational performance only if the team is committed to doing the work. Knowledge management is very important.
  • We cannot achieve new results doing what we have always done.
  • M&E provides information not only about what you are doing but also about what could be done, starting with what you are already doing. It is a useful tool for looking ahead.
  • M&E asks: “What more should we do?”

On evaluation culture and key roles:

  • The executive director is doing M&E every day.
  • Researchers make little attempt to embrace a culture of evaluation. The institutional challenge is to generate an M&E culture. To achieve this, the leader or leaders of each organization need to be committed to constant M&E in all areas.
  • Rather than a “strong” commitment to M&E, we should strive for a “reasonable” commitment.
  • Above all, it is necessary to create, maintain and develop a culture of evaluation, and find the times to make use of M&E. An evaluative culture is created by competition. This helps people to evaluate or assign value to the projects they are doing.

Remaining challenges

  • We should integrate resource mobilization with M&E and develop them internally.
  • We need to educate donors about social impact.
  • The challenge of all organizations is to discern the way to apply all this learning.
  • Organizations must commit to educate the rest about the lessons learned and be capable of transmitting this knowledge.
  • We must create or rely on systematization that is useful and effective for our organizational purposes. To do this it is important to have incentives, and to generate greater commitment and involvement that is not the result of crisis situations or complex processes, but rather comes from the need to make use of new tools, including a new way of looking more fully at what evaluation can help us accomplish.
  • A sense of belonging provokes interest.
  • Sometimes there can be too much M&E.