An interesting discussion about labels and frameworks has been going on in the EBPDN, that I think is worth blogging about.
A bit of background: an EBPDN member sent an email about RAPID's ‘policy entrepreneur’ (originally Simon Maxwell's idea, by the way) and the Asia Foundation's ‘development entrepreneur’ concepts. The label ‘entrepreneur’ led to a debate on the encroachment of the market into the ‘development sector’. I replied that the word in fact refers to being entrepreneurial rather than to an enterprise (a corporation), but that we should be equally worried about the term ‘development’ (something masterfully discussed by Jonathan Tanner in a recent article for The Guardian; have a look at the comments, too).
Anyway, this got me thinking about labels and I happened to come across two videos that got me inspired to write a longer response to the discussion.
The first video is about PRINCE2. Like other project management tools it has become all-pervasive and, in my view, a danger to the innovation that think tanks (and civil society organisations more generally) need. I am not against management; we need it. But I am concerned about a focus on process that leaves no space for thinking. The video is very useful because it tells us a lot about the origins of the framework and its tools. Something similar has happened with the logframe and other planning tools. I know that PwC interviews candidates for its international development division by asking them if they are familiar with DFID's logframe. This is what matters: can you fill in the boxes to keep the client happy? We forget that the logframe is not the table; it involves a series of much more interesting steps that can help us plan an intervention.
The second one is about the mind-numbing use of tools. Some of the best quotes from this one are worth repeating here:
- ‘We used all those things we do in manufacturing but tried to apply them to services.’
- ‘We introduced coloured bits of paper to try to make things more visual.’
- ‘It gave us a nice feeling that whoever had thought up these different procedures and forms had lots of knowledge and experience from previous work, so we did not have to think for ourselves.’
- ‘We spent so much time working on these tools… and lost sight of what we were doing.’
- ‘These tools stopped me from thinking for myself.’
The main lesson emerging from this is that when we adopt a framework of any kind we must be careful to know where it comes from and what it is supposed to do (and not do). The tool should not lead the way. You should never do step 3 after step 2 just because 3 comes after 2. You must have a better reason for it.
This is not to say that tools cannot help. Of course they can. If one is trying to fix a leaking pipe one needs a tool, and there is a process for doing it. But we should not rely on a single tool and hope for the best; a hammer, for instance, will certainly not be of any help. We've got to think about what we are doing. Why are we fixing the leak? Why are we screwing tight this and not that section of the pipe? Why is this the right tool and not that one? What other tools could help with the job if I could not find the right one? Why is it a good idea to fix it in this particular way and not in any other? If I know how the system I am fixing works, and why it is best to fix it in this or that way, then I am more likely to be able to improvise if something goes wrong. I may also be able to innovate if I find a better way of doing things.
We’ve got to think. And asking questions is a very good way of encouraging thinking.
The same applies to frameworks, tools and labels. We must find out what they are for. Let's not get too bogged down in a label that is just trying to save us from writing a paragraph when a couple of words can communicate the idea. I recognise, for instance, that ‘think tanks’ is a difficult label, but it saves time and most of the people I talk to understand that I am not attempting to make them all look like Brookings. The label carries a great deal of baggage, though, and this must be acknowledged. So I acknowledge it.
But let’s be careful about those that are clearly intended to lead us towards a particular view of the world. Labels can be dangerous. They make it possible to sound as if we know what we are talking about without really knowing much about it. Emma Broadbent found this in her studies of the political economy of research uptake in Africa. There is a lot of talk of ‘evidence based policy’, but few really stop to consider whether there is any evidence, what it is, how it is constructed, etc. I am appalled by the way in which NGOs and think tanks in this sector use the phrase ‘there is evidence’. They say things like ‘X% of policymakers like policy briefs or use mobile phones to access information’, when they should really say ‘X% of the N policymakers we interviewed in ABC country, whom we contacted via our local partner, like policy briefs or use mobile phones to access information’. This carelessness is an example of how a label can lose all meaning.
Evidence based policy itself is a rich label with a rich and long history that often goes unexplored. This idea, making use of several metaphors, has driven the formation and development of think tanks for at least a century.
There are others, of course. I mentioned the label ‘development’. Development policy is what developed countries call the policies of developing countries. But these are no different from the policies of developed countries. The right term is policy or, if we are more specific, health policy, education policy, economic policy, etc. Why is it that developing countries ‘develop’ while developed countries ‘progress’? In my mind this is intended to create a boundary between the development industry and the rest; to keep the work within the industry, so to speak.
I am particularly concerned about the use of the phrase ‘marketplace of ideas’. This comes hand in hand with the use of ‘demand’ for and ‘supply’ of research. It has been quickly adopted by many in the sector without knowing where it came from. And where did it come from? From the introduction of marketing to public and political life in the United States and the growth of the campaigning think tanks in the 1970s and 1980s.
Other more common labels, such as participation or empowerment, also deserve our attention. NGOs, think tanks and their donors get quite excited about these terms. A few years ago, at a meeting of NGOs in Uganda, I was asked to help define the objectives of several policy advocacy projects. As I heard the objectives it became clear that this was a case of a label being used without thinking. ‘We want to increase participation in education.’ This is meaningless. I wanted to know what it would look like, so I asked, one by one, for clarification. Slowly we got there. Participation meant going to parent-teacher meetings; more meant an increase of about 50%; and so on. With this we could work. With jargon it was difficult to know what had to be done.
With frameworks and tools (like one that I played a big role in developing: the RAPID Outcome Mapping Approach) we’ve got to pay particular attention to what they are for. What problem are they trying to solve? Are they useful for any problem? Are there other ways of solving different problems? You may also want to know how the framework came about: do its authors really know all of this, or was it just a guess?
A tool like the Alignment, Interest and Influence Matrix does not replace good old-fashioned political economy analysis, interviews, etc. to find out who the main players are, along with their positions, interests and so on. I can guarantee anyone trying to use it that without this proper background research (or knowledge) the tool will be useless, and a waste of time for those coming together to use it.
I have played a big role in developing the RAPID Outcome Mapping Approach, the Network Functions Approach, the Alignment, Interest, and Influence Matrix, and others. I am proud of them. But I am also critical of them and recognise their shortcomings.
However, I must admit that I have only recently had time to reflect on this properly. In writing a chapter on the history of the framework for a book that the RAPID team is putting together, I realised that the framework itself is the culmination of a somewhat chaotic process rather than a well-thought-out, planned one. In fact, its best bits emerged out of the challenges posed by creative and critical partners (such as Vanesa Weyrauch at CIPPEC), by new staff (such as Jeff Knezovich, who forced us to think about how it connected to communications, or Simon Hearn on networks), or when we learned from other approaches (mainly Outcome Mapping). The most fun and most interesting workshops happened when Ajoy Datta and I worked with ODI’s communications team (Nick Scott, Carolina Kern, Jeff, Angie Hawke, Leah Kreitzman) because they presented us with challenges that we could not possibly dismiss.
The best experiences I have had working with think tanks have been those where I have been challenged and forced to rethink my advice. When they look at my ‘formula’ and question it, I have to come back with a better idea (and this is why I often work with them for free, when I can afford it). Every workshop and change process needs a group of people who are sceptical; a group that is keen to question everything ‘the expert’ says. Of course I find them annoying (who wouldn’t?) but they are the ones I really end up working for and keeping in touch with after the project or workshop. The rest usually just sit there trying to learn the jargon and, if I may be blunt, wasting their time and their funders’ money.
As the ROMA framework settled and fewer people challenged it, I realised that we were just delivering it from memory. I remember once getting up 15 minutes before having to deliver a half-morning workshop (I had overslept; this is not my usual way of doing things) but still delivering it, and nobody noticed. Neither did I, by the way. I talked through the presentation and the jokes I add here and there, facilitated the group work, kept time without missing a second, answered questions, and stuck things on the wall, all on autopilot. I should have known this was a bad sign. Luckily, the communications team was around to challenge this and make it interesting again.
Over the years I have noticed that fewer people challenge me at ROMA, OM or other related workshops. The assumption that ‘he must know what he is talking about’ sets in more quickly than before. The use of terms/phrases now common (policy entrepreneur, bridging research and policy, outcomes, behaviours, the context matters, k-brokers/intermediaries/etc, etc.) helps to create this illusion of expertise and knowledge. These are the powerful narratives that Vanesa Weyrauch talked about in an email to the EBPDN:
As Enrique affirms, we should never underestimate the power of narratives (Roe and Fisher are excellent sources to further this reflection). In fact, I believe this [the EBPDN] is a venue to explore with much more depth when we are jointly discussing on how and why research should (or should not) interact with policy.
Here is a facilitation tip for all of you: if you sound as if you know what you are talking about people will (more often than not) believe you. If you give them options (e.g. What do you think of this way of organising the group work? Do you prefer another way?) you’ll lose control and get little done (unless this is in fact your objective). At workshops, most people go to be told things and are in a ‘ready to do what they say’ mode. Very few people remain slightly sceptical and challenge you along the way. They are priceless.
Facilitation tip number two: find these outliers and do the workshop for them. They will come up to you at the end and ask the intelligent questions.
The consequence of not thinking about the frameworks and tools we use (both as providers and recipients of them) is that we never really understand them (often there is nothing to understand, unfortunately) but think we do, because we are busy filling in boxes or sticking coloured paper on walls. I’d be rich if I got… er… maybe 1,000 dollars (not that many people have said this to me, but quite a few) for every time someone has told me that OM is easier because it is more flexible than the logframe. No, no, no! It is more difficult BECAUSE it is more flexible. Flexibility demands thinking (lots of it) and being on the ball: more monitoring, more time spent reflecting, etc. But they say this because they were focused on memorising the steps and tools and not on the idea.
The idea (ideas, really) underpinning OM took me quite some time to get; I’d say at least two or three years until I was confident that I understood what Sarah Earl, Terry Smutylo and Fred Carden had been thinking about when they put it all together. And to really understand it I had to plan and deliver several workshops in my own words and with my own slides, answer questions I did not know the answer to, rethink my sessions, try them again, apply the methodology, fail, learn from these mistakes, etc. Even now, I need to have the occasional chat with Simon Hearn or read through the Outcome Mapping Learning Community’s discussions. A toolkit won’t do; a workshop is never enough.
I blame this dumbing down, in part, on the rise of development studies (and related) programmes. They are too general in nature, focus above all on jargon and the architecture of the industry, and offer little if any opportunity for technical specialisation. (I have first-hand experience.) I continue to hold the view that we need more people with clear disciplines (lawyers, medical practitioners, economists, engineers, sociologists, anthropologists, historians, political scientists, etc.) and fewer with generalist and purely managerial skills (although these can be useful). I would rather entrust my aid funding to a young, smart anthropologist than to a PRINCE2 guru any day. The former will ask questions before getting on with it; the latter will be too busy filling in templates and putting together a project structure that will spend half the budget before anything is actually done.
My advice to anyone wanting to join the industry is to study an established profession or discipline and apply it to a developing country context. If they are smart enough they will figure out the differences in context. And for employers: don’t look for ‘development studies’; hire instead economists, historians, astrophysicists, engineers, mathematicians, philosophers, anthropologists, linguists, etc. (come to think of it, that was the composition of my team at RAPID).
Anyway, back to the point: labels, tools and frameworks can prevent us from thinking. They can make us sound smart and competent when we really have no idea what we are talking about. They make us look busy and therefore appear and feel valuable.
Be careful of donors, think tanks, NGOs and consultancies brandishing frameworks and tools. Ask instead: What is your idea? If they can’t explain it then you’ll know their framework is meaningless; and if they can, well, maybe you won’t need a framework after all.
[Editor's note: Vanesa Weyrauch is the Principal Researcher of the Influence, Monitoring and Evaluation Programme at CIPPEC, which she created and has led since 2006. She founded a network of leading think tanks in Latin America, with the support of GDNet, Directores Ejecutivos de America Latina (DEAL). This post is a response to Developing research communication capacity: lessons from recent experiences and can be read alongside Caroline Cassidy's own post]
I am not an expert on capacity development per se, but over the last few years at CIPPEC I have been a practitioner of a combination of activities in Latin America which have helped me reflect and learn about what seems most promising in terms of helping others improve the way they try to influence policy through research. Much of this work has been carried out under EBPDN LA with ODI, and most of it under the programme “Spaces for Engagement” with GDNet, a strategic partner for us in this field.
This is why Enrique’s and Martine’s review findings from a recent communications project that RAPID and INASP worked on for IDRC really caught my attention. Just in time, I thought. After a couple of years of trying out several ways of improving what we know about policy influence (combining formal training workshops with online courses, technical assistance to think tanks, the design of an innovative M&E methodology focused on policy influence, etc.), we have decided at CIPPEC to develop a capacity building strategy for 2013-2018 that allows us to be more strategic in the way we use our limited resources to help policy research organisations and their leaders enhance their policy impact.
Firstly, I believe that some responses to the questions posed by Enrique in his previous post, and some of his recommendations, may vary according to the type of individual or organisation taking part in the initiative (they tend to be very heterogeneous, which on the one hand enriches the exercise but on the other makes it extremely hard to please all participants). At CIPPEC we have had experience training networks on these issues, and even though members might share beliefs, values, research projects, etc., each network member had very different capacities and interests in research communications, as revealed in a pre-diagnosis. So how do we deal with this when resources are scarce? Ideally we would have all the time and resources to work both in groups and individually, to support real change processes with viable plans, and to enable cross-learning, but this is rarely the case. We therefore face the challenge of making the most of what is available; smart decisions that use the evidence shared by Enrique, together with our own experience, are crucial.
Another key and related decision is whether those offering the support aim to train individuals and/or organisations. The strategies for doing so differ significantly, and it is extremely difficult to make these decisions at the very beginning of a capacity building project, especially when a diverse group will take part in it.
Finally, another tricky but very profound question is: how do we monitor and evaluate these efforts? How do we know if and how we have contributed to developing these sorts of capacities? I agree that stand-alone workshops are not the most desirable strategy, but I have seen and heard of people and organisations making big changes after attending one where excellent trainers were able to raise their awareness of these issues and pose the right questions at the individual and organisational levels. So, what are we aiming at, and how will we know if we have done well?
An excellent paper that has significantly influenced how I think about all these issues, and how we plan to further develop our capacity to build capacity at CIPPEC, is “Learning purposefully in capacity development. Why, what and when to measure?” by Peter Taylor and Alfredo Ortiz. We need to develop new thinking about these issues, and this paper triggers that type of thinking for all of us: donors, “trainers” and “trainees”. As we titled one of our handbooks, I believe we are all a little bit learners, teachers and practitioners. That is why the ways we generate and share knowledge are increasingly horizontal! For us, online courses have enlarged the opportunity to make this happen, as knowledge is shared and discussed among peers and colleagues. What participants in our courses ask, the reflections they make and the real-life examples they share have all greatly enhanced the knowledge we share in the next edition of the same course.
Finally, I am more and more convinced of the value of constant cross-fertilisation between theory and practice (and that is what we have tried to do in our work all these years), which significantly affects what we consider valuable knowledge and how we share it. Sir Ken Robinson has very effectively conveyed the importance of rethinking education: there is a real and increasing call for creativity in the way we treat knowledge. For this, group learning is key; collaboration is the strategy for defining the crucial questions and co-building the answers. Spaces, like this blog, where we can share what we know and don’t know about the topics we are passionate about are a promising sign of how the capacity of all of us (teachers, learners and practitioners, roles that all individuals move between) can be further developed.
[Editor's note: If you'd like to join the conversation with a post of your own, please send us an email or tweet. See the original post for inspiration: Developing research communication capacity: lessons from recent experiences]
On 11-12 August, onthinktanks, CIES, ODI, the Evidence-Based Policy in Development Network, CIPPEC, Grupo FARO, GDNet and IDRC’s Think Tank Initiative co-organised a meeting of think tanks from all over Latin America. It brought together the directors and/or deputies of at least 30 think tanks, who came to share and learn about the business of running these kinds of organisations.
On the 11th, the book Vínculos entre conocimiento y política: el rol de la investigación en el debate público en América Latina was launched, with the participation of Enrique Mendizabal and Norma Correa (editors) and Martín Tanaka and Mercedes Botto (authors), and with comments from Antonio Romero of the Think Tank Initiative. The book follows from a study edited by Enrique Mendizabal and Kristen Sample on the relationship between think tanks and political parties in Latin America.
You can read the book below (I hope it will soon be published in English):
The book’s outline:
Investigadores, políticos, funcionarios públicos y periodistas en América Latina: en busca de una gran conversación. Norma Correa Aste (Pontificia Universidad Católica del Perú) y Enrique Mendizabal (onthinktanks.org)
PRIMERA SECCIÓN: Estudios Marco
La relación entre investigación y políticas públicas en América Latina: un análisis exploratorio. Martín Tanaka, Rodrigo Barrenechea y Jorge Morel (Instituto de Estudios Peruanos, Perú)
Think tanks en América Latina: radiografía comparada de un nuevo actor político. Mercedes Botto (Facultad Latinoamericana de Ciencias Sociales, Argentina)
El rol del estado en el financiamiento de la investigación sobre políticas públicas en América Latina. Martín Lardone y Marcos Roggero (Universidad Católica de Córdoba, Argentina)
SEGUNDA SECCIÓN: Estudios de Caso
Una extraña pareja: la relación entre los medios de comunicación y los centros de investigación en políticas públicas. Ricardo Uceda (Instituto Prensa y Sociedad, Perú)
Medios de comunicación y uso de la investigación en políticas públicas en América Latina. Casos: Clarín (Argentina), El Diario de Hoy (El Salvador) y La Jornada (México), para el período marzo-abril 2010
Pablo Livszyc y Natalia Romé
(Instituto para la Participación y el Desarrollo, Argentina)
Think Tanks: los medios de poder en la Bolivia de Evo Morales Rafael Loayza Bueno (Universidad Mayor de San Andrés y Universidad Católica San Pablo, Bolivia)
El rol de la evidencia sobre políticas públicas en contextos de polarización: el conflicto por los derechos de exportación en la Argentina Tomás Garzón de la Roza
(Universidad Austral, Argentina)
TERCERA SECCIÓN: Balance y agenda de investigación
Estructuras políticas y uso de la investigación en las políticas públicas. Método e hipótesis para una agenda de investigación. Adolfo Garcé
(Universidad de la República, Uruguay)
More than 40 representatives of Latin American think tanks met in Lima on 11-12 August to share new research and experiences.
The full programme in Spanish is here, and below is an outline of the event with key links to the resources (in Spanish).
Harry Jones, from the RAPID programme, summarises and comments on a series of discussions that his work on complexity is generating. He tackles the ‘horses for courses’ argument, addressing two important questions. Firstly, and most obviously: how do you choose the right horse for your course? Whether we are talking about policy instruments, evaluation methods or gambling on horses, this will never be an easy question. Secondly: what are we arguing against? Implicitly, ‘horses for courses’ is cast against a ‘blueprint approach’, where a few standardised solutions (whether tools, methods or, more generally, types of programmes) are rolled out to be implemented in diverse contexts, irrespective of context.