Scaling creates dynamic change that calls for dynamic evaluation – continuous learning before, during and after scaling. Dynamic evaluation helps us apply the other three principles by clarifying when scaling is justified, what the optimal scale is, and how to coordinate.
An introduction to Guiding principle 4: Dynamic evaluation
- How have you incorporated dynamic evaluation into scaling processes? What challenges and opportunities have you encountered?
- Have you used the results of dynamic evaluation to inform an approach to scaling impact? If so, how? If not, what inhibited this?
General conclusions from the discussion
Each conversation leading up to the session on dynamic evaluation highlighted its importance in informing and upholding the other three guiding principles. Scaling advisors raised the importance of a learning culture within organisations to enable dynamic evaluation. In environments where evaluation is understood mainly as an accountability tool, there may be a reluctance to use approaches that surface the information needed to identify and pursue optimal scale. There is also a need to think beyond individual projects when taking a dynamic evaluation stance, given that scaling processes typically span multiple projects: dynamic evaluation means considering what has been done before and asking questions that could inform potential future efforts to scale for optimal impact. Doing this requires sufficient resources for evaluation, as well as flexibility around what will be assessed.
The following statements do not reflect all the opinions or reflections presented during the session. In some cases they reflect ideas developed and shared within participants' own working groups. We have attributed each idea to the person who shared it during the session.
[Tobias Schönwetter, University of Cape Town]
- With multi-cycle projects, change often happens long after a cycle ends. So we make sure we document what we did in previous cycles, to be able to think about impact.
[Enrique Mendizabal, OTT]
- Scaling interventions and evaluations may involve different approaches, methods and expertise at different stages of the process.
[Blanca Llorente, Fundacion Anaas]
- Dynamic evaluation requires building an evaluative and learning culture within organisations. How do we build and nourish an evaluation culture? What incentives and communication tools do we use to build that culture within our groups? One way is to incorporate ‘learning’ into the role of every member of the team.
- Evaluations are always political. Evaluating a scaling effort by (or with) the government can be highly political.
[Johannes Linn, The Brookings Institution]
- Dynamic evaluation demands that we look beyond the ‘what’ and focus on the ‘how’ – how to justify scale, how to determine the optimal scale, how to partner or collaborate and so on. It requires a focus on learning rather than accountability.
- Evaluation needs appropriate resources. The monitoring, evaluation and learning (MEL) working group of the Scaling Up Community of Practice has concluded that about 20% of budgets for pilot projects should be devoted to MEL, rather than the common 3–5%.
[Tatiana Rincón, Fundación Capital]
- Dynamic evaluation requires resources. Funding for this function is often limited, but we recognise that it is central to any scaling effort.
- It’s necessary to invest in developing more cost-effective evaluation techniques. Evaluation resources are often focused on assessing the impact of the pilot intervention through randomised controlled trials, which are resource intensive. But evaluation is needed throughout the scaling effort, and other methods are sometimes more appropriate and affordable.
[Hayley Price-Kelly, IDRC]
- An evaluation of scaling efforts often involves multiple projects that may not be formally connected to each other, and this presents a challenge.
- Dynamic evaluation can mean many different things depending on where you are in the process. Sometimes, as we move through projects and towards a more specific scaling intention, it’s about making sure you’re doing the right things to understand justification, optimal scale and coordination.
[Leslie Fierro, IDRC]
- Dynamic evaluation pushes the ‘how’ and ‘why’ rather than the ‘what’ questions.
- There’s often an assumption that effects will be visible right away. In a dynamic evaluation context, it’s really important for funders to think about the lag time between an effort and what we expect to see, and how that lag changes over time. Outcome-focused evaluation tends to prompt this kind of thinking.
- Evaluative thinking means continuously questioning the assumptions embedded in our implementation process, so that we can continuously learn and adapt.