Programming for complexity: how to get past ‘horses for courses’

9 September 2011

While discussing the issue of implementing programmes in the face of complexity, a theme came up that will be familiar to many people, and the conversation (along with Quique’s suggestion) prompted me to write up some reflections on it. The importance of appropriate and context-sensitive selection of approaches and interventions in the face of complexity is a common theme in the literature, and in practitioner discussions on the issue. Whether for planning, project management, evaluation, facilitation, or anything else, the strong feeling seems to be that the way to recognise complexity is a kind of adaptive pluralism – as Bob Williams put it in our discussion on the Outcome Mapping Learning Community, successful management and implementation is all about the application of appropriate methods and tools to fit the context. He put it very well here:

I’m not arguing that anything goes or some vague methodological pluralism, but the appropriate use of tools to manage different aspects of situations.

This is the ‘horses for courses’ argument, and I think it is a powerful one. It was the main message I took home from an excellent conference last year, ‘Evaluation revisited: improving the quality of evaluative practice by embracing complexity’ – it emerged as a strong conviction as numerous evaluation experts and practitioners came together to discuss what ‘complexity’ meant for evaluation. And the need to take a balanced and context-sensitive approach to choosing evaluation methods has been a cause of mine for a while.

For me, the ‘horses for courses’ argument throws up two important questions. Firstly, and most obviously: how do you choose the right horse for your course? Whether we’re talking about policy instruments, evaluation methods, or gambling on horses, this will never be an easy question. This theme also came to the fore in the OMLC discussion, and the stalwart of the development evaluation community Rick Davies suggested, citing recent experience auditing AusAID’s work on maternal health, that there is unlikely to be one ‘right’ answer and that in reality there may be many:

there is not even one best evaluation approach for a given project, i.e. a single best horse for a particular course. In fact what we need is a best package of approaches… for a given project.

It’s clear that these choices are difficult judgements, and the tacit knowledge contained in expertise and hard-earned experience is required. But while many of these judgements may be hard to formalise, we shouldn’t overly mystify the issue or overly elevate the role of the ‘expert’. On a basic level there must be some way in which the expert decides what might be right for the job – or else we’re coming quite close to saying ‘anything goes’. To put it another way, there must be something the expert can tell the non-expert about the choices facing them.

I’ve recently had the privilege to do some work on the question of how to choose evaluation methods to fit your context, under an initiative which gained some momentum thanks to the Evaluation Revisited conference. BetterEvaluation is an international collaboration aiming to improve evaluation by helping practitioners choose the appropriate (combination of) tools and methods, and to apply them well (for more information see this brochure, until the full site is up and running). We have been trying to help practitioners choose the right methodological ‘horse’ for their programme’s ‘course’. This will not be a matter of finding a formula or a single right answer, but we’re hoping that a combination of codifying some key elements of the decision, providing fora for experts and practitioners to interact, and instigating some action research on appropriate methodologies will provide some kind of assistance.

The second question that occurs to me from the ‘horses for courses’ argument is: what are we arguing against? Implicitly, ‘horses for courses’ is cast against a ‘blueprint approach’, where a few standardised solutions (whether tools, methods, or more generally types of programmes) are rolled out to be implemented in diverse contexts irrespective of context.

This is a bit of a straw man: is anyone really advocating for a blueprint to be broadly and crudely applied? Does anyone really consider themselves to be applying such an approach? Because of this, I wonder whether we have got the wrong question in the first place – if the recommendation is so obvious that nobody could disagree, what’s the value in the message? Or, more crucially, since nobody thinks they’re applying approaches out of context, they are unlikely to heed the ‘horses for courses’ message (although they may perhaps listen to the guidance on choosing their horses).

I think that the most important message behind the ‘horses for courses’ argument is that policy makers and practitioners don’t always feel there is the space to choose the right horse for their course. In the face of complexity, certain built-in aspects of the way government organisations and development agencies work mean that, at crucial moments, they find themselves locked into approaches they wish they weren’t. For me, THIS is where lessons from complexity can really help us. When I looked into how implementation can tackle complex problems, a clear set of principles seemed to emerge about how to ensure interventions avoid the cookie-cutter approach. Meeting the challenge of complexity means allowing maximum space for this kind of judgement (horses for courses) throughout the chain: giving appropriate space and structure for the exercise of

  1. Context-specific judgements at different levels of implementation chains (‘where’),
  2. Prescient and timely decisions throughout implementation as well as before it (‘when’), and
  3. A broad range of perspectives and types of expertise, used in a holistic manner (‘how’).

See the recent working paper for more on each of these.

Now this isn’t to say that everything has to remain ‘loose’ in interventions which try to tackle complex problems. Another common theme of discussions on complexity is the question: does recognising complexity mean that our interventions have to become more and more complex themselves? This came up in a recent discussion we had on the EBPDN as well as on the OMLC, again stemming from a discussion on what complexity means for implementation. Some people argue that meeting complexity means building more and more complex programmes (often referring, for example, to the ‘law of requisite variety’), but I don’t think so. The analytical tools of systems theory and the complexity sciences give us ways of looking at the broad, interlinked patterns of behaviour in a system, but this does not necessarily mean that we need to try to address all of the different issues.

Basic organisational science and human psychology say that it’s important to have some kind of understandable structure to bind together a team or an organisation, and these will inevitably be based on a simplification of reality. As Bob pointed out in our discussion, interventions are likely to require a mix of tools, which may in turn fix some elements and leave some open:

It is not correct to assume that situations that display complex behaviours cannot be managed, in part, by tools that are associated with managing simple behaviours. That’s because any situation will have elements within it that are relatively simple. Caring for a child is a complex endeavour, but responding to the need for a nappy change is simple. Re-establishing a community devastated by a tsunami is a complex endeavour but replacing a destroyed building is mostly relatively simple. For those simple aspects it is highly inefficient and unnecessary to use management tools associated with managing more complex elements of a situation.

It would be hard to get anywhere without some level of simplification and formalisation. The important question is WHAT aspects to fix and simplify, and what to leave loose and open. And for me, that means the careful application of principles of complexity to those aspects of your work which are complex. It is HERE that we will get into some really meaty discussions, decisions and arguments. Principles and approaches for complexity are clearly and concretely different from the approaches proposed by others (unlike the ‘horses for courses’ position), and there urgently need to be grounded and pragmatic discussions about when to use which.