
Posts tagged ‘EBPDN’

The onthinktanks interview: Priyanthi Fernando

Nayana Godamunne from CEPA agreed to interview the organisation's Executive Director, Priyanthi Fernando, opening up about its practices and the challenges it faces as an organisation trying to find its way through research in a complex environment.


Labels, frameworks and tools: do they stop us from thinking?

An interesting discussion about labels and frameworks has been going on in the EBPDN, and I think it is worth blogging about.

A bit of background: an EBPDN member sent an email related to RAPID's 'policy entrepreneur' (the original idea comes from Simon Maxwell, by the way) and the Asia Foundation's 'development entrepreneur' concepts. The label 'entrepreneur' led to a debate on the encroachment of the market in the 'development sector'. I replied that in fact the word referred to being entrepreneurial rather than to an enterprise (a corporation), but that we should be equally worried about the term 'development' (something masterfully discussed by Jonathan Tanner in a recent article for The Guardian -have a look at the comments, too).

Anyway, this got me thinking about labels, and I happened to come across two videos that inspired me to write a longer response to the discussion.

The first video is about PRINCE2. Like other project management tools it has become all-pervasive and, in my view, dangerous for the kind of innovation that think tanks (and civil society organisations more generally) need. I am not against management; we need it. But I am concerned about a focus on process without space for thinking. This is a very useful video, as it tells us a lot about the origins of the framework and its tools. Something similar has happened with the LogFrame and other planning tools. I know that PwC interviews candidates for its international development division by asking them whether they are familiar with DFID's LogFrame. This, apparently, is what matters: can you fill in the boxes to keep the client happy? We forget that the LogFrame is not the table but a series of much more interesting steps that can help us plan an intervention.

The second one is about the mind-numbing use of tools. Some of the best quotes from this one are worth repeating here:

  • We used all those things we do in manufacturing but tried to apply them to services
  • We introduced coloured bits of paper to try to make things more visual
  • Gave us a nice feeling that whoever had thought up these different procedures and forms had lots of knowledge and experience from previous work, so we did not have to think for ourselves
  • We spent so much time working on these tools… and lost sight of what we were doing
  • These tools stopped me from thinking for myself

The main lesson emerging from this is that when we adopt a framework of any kind we must be careful to know where it comes from and what it is supposed to do (and not do). The tool should not lead the way. You should never do step 3 after step 2 just because 3 comes after 2. You must have a better reason for it.

This is not to say that tools cannot help. Of course they can. If one is trying to fix a leaking pipe one needs a tool -and there is a process to do it. But we should not rely on a single tool and hope for the best. A hammer, for instance, will certainly not be of any help. We've got to think about what we are doing. Why are we fixing the leak, why are we tightening this and not that section of the pipe, why is this the right tool and not that one, what other tools could help me with the job if I could not find the right one, why is it a good idea to fix it in this particular way and not in any other way, etc.? If I know how the system I am fixing works and why it is best to fix it in this or that way, then I am more likely to be able to improvise if something goes wrong. I may also be able to innovate if I find a better way of doing things.

We’ve got to think. And asking questions is a very good way of encouraging thinking.

The same applies to frameworks, tools and labels. We must find out what they are for. Let's not get too bogged down over a label that is just trying to save us from writing a paragraph when a couple of words can communicate the idea. I recognise, for instance, that 'think tanks' is a difficult label, but it saves time, and most of the people I talk to get that I am not attempting to make them all look like Brookings. But the label does carry huge baggage, and this must be acknowledged. So I acknowledge it.

But let's be careful about those labels whose clear intention is to lead us towards a particular view of the world. Labels can be dangerous. They make it possible to sound as if we know what we are talking about without really knowing much about it. Emma Broadbent found this in her studies of the political economy of research uptake in Africa. There is a lot of talk of 'evidence based policy' but few really stop to consider whether there is any evidence, what it is, how it is constructed, etc. I am appalled by the way in which NGOs and think tanks in this sector use the phrase 'there is evidence'. They say things like 'X% of policymakers like policy briefs or use mobile phones to access information', when they should really have said 'X% of the N policymakers we interviewed in ABC country, and whom we contacted via our local partner, like policy briefs or use mobile phones to access information'. This carelessness in the use of the label of evidence is an example of how a label can lose all meaning.

Evidence based policy itself is a rich label with a rich and long history that often goes unexplored. This idea, making use of several metaphors, has driven the formation and development of think tanks for at least a century.

There are others, of course. I mentioned the label 'development'. Development policy is what developed countries call the policies of developing countries. But these are no different from the policies of developed countries. The right term is policy or, if we are more specific, health policy, education policy, economic policy, etc. Why is it that developing countries 'develop' while developed countries 'progress'? In my mind this is intended to create a boundary between the development industry and the rest -to keep the work within the industry, so to speak.

I am particularly concerned about the use of the phrase 'marketplace of ideas'. This comes hand in hand with the use of 'demand' for and 'supply' of research. It has been quickly adopted by many in the sector without knowing where it came from. And where did it come from? Well, it comes from the introduction of marketing to public and political life in the United States and the growth of the campaigning think tanks in the 1970s and 1980s.

Other more common ones, such as participation or empowerment, deserve our attention. NGOs, think tanks and their donors get quite excited about these terms. A few years ago, at a meeting of NGOs in Uganda, I was asked to help define the objectives of several policy advocacy projects. As I heard the objectives it became clear that this was a case in which a label was being used without thinking. "We want to increase participation in education." This is meaningless. I wanted to know what this looked like, and so I asked, one by one, for clarification. Slowly we got there. Participation meant going to parent-teacher meetings; more meant an increase of about 50%; etc. With this we could work. With jargon it was difficult to know what had to be done.

With frameworks and tools (like one that I played a big role in developing: the RAPID Outcome Mapping Approach) we've got to pay particular attention to what they are for. What problem are they trying to solve? Are they useful for any problem? Are there other ways of solving different problems? You may also want to know how they came about: did their developers really know all of this or was it just a guess?

A tool like the Alignment, Interest, and Influence Matrix does not replace good old-fashioned political economy analysis, interviews, etc. to find out who the main players are, and what their positions and interests are. I can guarantee anyone trying to use it that without this proper background research (or knowledge) the tool will be useless, and a waste of time for those coming together to use it.
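The mechanics of such a matrix are trivial, which is rather the point: placing actors on a grid is easy; knowing where they actually belong takes research. Purely as an illustration, here is a toy, AIIM-style sketch in Python; the stakeholders, their scores and the quadrant labels are all invented for this example, not part of the actual tool:

```python
# Toy sketch of an Alignment, Interest and Influence Matrix (AIIM)-style
# grid. The stakeholders and scores below are invented: in practice every
# number must come from prior political economy analysis and interviews.

# (alignment with the policy objective, interest in the issue), both 0-1.
stakeholders = {
    "Ministry of Finance": (0.8, 0.3),
    "Local NGO coalition": (0.9, 0.9),
    "Opposition party":    (0.2, 0.8),
    "Trade association":   (0.3, 0.2),
}

def quadrant(alignment: float, interest: float) -> str:
    """Classify an actor into one of four illustrative quadrants."""
    if alignment >= 0.5:
        return "champion" if interest >= 0.5 else "potential ally (raise interest)"
    return "opponent to engage" if interest >= 0.5 else "low priority (monitor)"

for name, (align, interest) in stakeholders.items():
    print(f"{name}: {quadrant(align, interest)}")
```

Filling in such a table is the easy five minutes at the end; defensible scores are the product of exactly the background research the tool cannot replace.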

I have played a big role in developing the RAPID Outcome Mapping Approach, the Network Functions Approach, the Alignment, Interest, and Influence Matrix, and others. I am proud of them. But I am also critical of them and recognise their shortcomings.

However, I must admit that I've only recently had time to reflect on this properly. In writing a chapter on the history of the framework for a book that the RAPID team is putting together, I realised that the framework itself is more the culmination of a somewhat chaotic process than of a planned, well-thought-out one. In fact, its best bits emerged out of the challenges posed by creative and critical partners (such as Vanesa Weyrauch at CIPPEC), new staff (such as Jeff Knezovich, who forced us to think about how it connected to communications, or Simon Hearn on networks), or when we learned from other approaches (mainly Outcome Mapping). Indeed, the most fun and most interesting workshops happened when Ajoy Datta and I worked with ODI's communications team (Nick Scott, Carolina Kern, Jeff, Angie Hawke, Leah Kreitzman) because they presented us with challenges that we could not possibly dismiss.

The best experiences I have had working with think tanks have been the ones where I have been challenged and forced to rethink my advice. When they look at my 'formula' and question it, I have to come back with a better idea (and this is why I often work with them for free -if I can afford it, unfortunately). Every workshop and change process needs a group of people who are sceptical; a group that is keen to question everything 'the expert' says. Of course I find them annoying (who wouldn't?) but they are the ones I really end up working for and keeping in touch with after the project or workshop. The rest are usually just sitting there trying to learn the jargon. And, if I may be blunt, wasting their time and their funders' money.

As the ROMA framework settled and fewer people challenged it, I realised that we were just delivering it from memory. I remember once getting up 15 minutes before having to deliver a half-morning workshop (I had overslept; this is not my usual way of doing things) but still delivering it, and nobody noticed. Neither did I, by the way. I talked through the presentation and the jokes that I add here and there, facilitated the group work and even kept time without missing a second, answered questions, and stuck things on the wall -all on autopilot. I should have known this was a bad sign. Luckily, the communications team was around to challenge this and make it interesting again.

Over the years I have noticed that fewer people challenge me at ROMA, OM or other related workshops. The assumption that 'he must know what he is talking about' sets in more quickly than before. The use of now-common terms and phrases (policy entrepreneur, bridging research and policy, outcomes, behaviours, the context matters, knowledge brokers/intermediaries, etc.) helps to create this illusion of expertise and knowledge. These are the powerful narratives that Vanesa Weyrauch talked about in an email to the EBPDN:

As Enrique affirms, we should never underestimate the power of narratives (Roe and Fisher are excellent sources to further this reflection). In fact, I believe this [the EBPDN] is a venue to explore with much more depth when we are jointly discussing on how and why research should (or should not) interact with policy.

Here is a facilitation tip for all of you: if you sound as if you know what you are talking about, people will (more often than not) believe you. If you give them options (e.g. 'What do you think of this way of organising the group work? Do you prefer another way?') you'll lose control and get little done (unless this is in fact your objective). At workshops, most people go to be told things and are in a 'ready to do what they say' mode. Very few remain slightly sceptical and challenge you along the way. They are priceless.

Facilitation tip number two: find these outliers and do the workshop for them. They will come up to you at the end and ask the intelligent questions.

The consequence of not thinking about the frameworks and tools we use (both as providers and as recipients of them) is that we never really understand them (often there is nothing to understand, unfortunately) but think we do, because we are busy filling in boxes or sticking coloured papers on walls. I'd be rich if I got .. er.. maybe 1,000 dollars (not that many people have said this to me, but quite a few) for every time someone has told me that OM is easier because it is more flexible than the LogFrame. No, no, no! It is more difficult BECAUSE it is more flexible. Flexibility demands thinking (lots of it), being on the ball, more monitoring, more time spent reflecting, etc. But they say this because they were focused on memorising the steps and tools and not on the idea.

The idea (ideas, really) underpinning OM took me quite some time to get. I'd say it took at least two or three years until I was confident that I understood what Sarah Earl, Terry Smutylo and Fred Carden had been thinking about when they put it all together. And to really understand it I had to plan and deliver several workshops in my own words and with my own slides, answer questions I did not know the answer to, rethink my sessions, try them again, apply the methodology, fail, learn from these mistakes, etc. Even now, I need to have the occasional chat with Simon Hearn or read through the Outcome Mapping Learning Community's discussions. A toolkit won't do; a workshop is never enough.

I blame this dumbing down, in part, on the rise of development studies (and related) programmes. They are too general in nature, focus most of all on jargon and the architecture of the industry, and offer little if any opportunity for technical specialisation. (I have first-hand experience.) I continue to hold the view that we need more people with clear disciplines (lawyers, medical practitioners, economists, engineers, sociologists, anthropologists, historians, political scientists, etc.) and fewer with generalist and simply managerial skills (although these can be useful). I would rather entrust my aid funding to a young, smart anthropologist than to a PRINCE2 guru any day. The former will ask questions before getting on with it; the latter will be too busy filling in templates and putting together a project structure that will spend half the budget before anything is actually done.

My advice to anyone wanting to join the industry is to study an established profession or discipline and apply it to a developing country context. If they are smart enough they will figure out the differences in context. And for employers: don't look for 'development studies'; hire instead economists, historians, astrophysicists, engineers, mathematicians, philosophers, anthropologists, linguists, etc. (come to think of it, that was the composition of my team at RAPID).

Anyway, back to the point: labels, tools and frameworks can prevent us from thinking. They can make us sound smart and competent when we really have no idea what we are talking about. They make us look busy and therefore appear and feel valuable.

Be careful of donors, think tanks, NGOs and consultancies brandishing frameworks and tools. Ask instead: What is your idea? If they can’t explain it then you’ll know their framework is meaningless; and if they can, well, maybe you won’t need a framework after all.

Capacity development: it is time to think about new ways of generating and sharing knowledge

[Editor's note: Vanesa Weyrauch is the Principal Researcher of the Influence, Monitoring and Evaluation Programme at CIPPEC, which she created and has led since 2006. With the support of GDNet, she founded Directores Ejecutivos de America Latina (DEAL), a network of leading think tanks in Latin America. This post is a response to Developing research communication capacity: lessons from recent experiences and can be read alongside Caroline Cassidy's own post.]

I am not an expert on capacity development per se, but over the last few years at CIPPEC I have been a practitioner of a combination of activities in Latin America which have helped me reflect and learn about what seems most promising in terms of helping others improve the way they try to influence policy through research. Much of this work has been carried out under EBPDN LA with ODI, and most of it under the programme "Spaces for engagement" with GDNet, a strategic partner for us in this field.

This is why Enrique's and Martine's review findings, from a recent communications project that RAPID and INASP worked on for IDRC, really caught my attention. Just in time, I thought. After a couple of years of trying out several ways of improving what we know about policy influence (combining formal training workshops with online courses, technical assistance to think tanks, the design of an innovative M&E methodology focused on policy influence, etc.), we have decided at CIPPEC to develop a capacity building strategy for 2013-2018 that allows us to be more strategic in the way we use our limited resources to assist policy research organisations and their leaders to enhance policy impact.

Firstly, I believe that some responses to the questions posed by Enrique in his previous post, and some of his recommendations, may vary according to the type of individual or organisation taking part in the initiative (they tend to be very heterogeneous, which on the one hand enriches the exercise but on the other makes it extremely hard to please all participants). At CIPPEC we have had experience training networks on these issues, and even though members might share beliefs, values, research projects, etc., each network member had very different capacities and interests in research communications, as revealed in a pre-diagnosis. So how do we deal with this when resources are scarce? Ideally we would have all the time and resources to work both in groups and individually, to support real change processes with viable plans, and to enable cross-learning; but this is rarely the case. We therefore face the challenge of making the most of what is available; smart decisions that use the evidence shared by Enrique and our own experience are crucial.

Another key and related decision is whether those offering the support aim to train individuals and/or organisations. Strategies for each differ significantly, and it is extremely difficult to make these decisions at the very beginning of a capacity building project, especially when a diverse group will take part in it.

Finally, another tricky but very profound question is: how do we monitor and evaluate these efforts? How do we know if and how we have contributed to developing these sorts of capacities? I agree that stand-alone workshops are not the most desirable strategy, but I have heard of and seen people and organisations make big changes after attending one where excellent trainers were able to raise their awareness of these issues and pose the right questions at the individual and organisational levels. Thus, what are we aiming at, and how will we know if we have done well?

An excellent paper that has significantly influenced how I think about all these issues, and how we plan to further develop our capacity to build capacity at CIPPEC, is "Learning purposefully in capacity development. Why, what and when to measure?" by Peter Taylor and Alfredo Ortiz. We need to develop new thinking about these issues, and this paper triggers that type of thinking for all of us: donors, "trainers" and "trainees". As we titled one of our handbooks, I believe we are all a little bit of learners, teachers and practitioners. That is why ways to generate and share knowledge are increasingly horizontal! For us, online courses have enlarged the opportunity to make this happen, as knowledge is shared and discussed between peers and colleagues. What the participants in our courses ask, the reflections they make and the real-life examples they share have all greatly enhanced the knowledge we share in the next edition of the same course.

Finally, I am more and more convinced of the value of constant cross-fertilisation between theory and practice (and that is what we have tried to do in our work all these years), which will significantly affect what we consider valuable knowledge and how we share it. Sir Ken Robinson has very effectively conveyed the importance of rethinking education: there is a real and increasing call for creativity in the way we treat knowledge. For this, group learning is key; collaboration is the strategy for defining crucial questions and co-building the answers. Spaces -like this blog- where we can share what we know and don't know about the topics we are passionate about are a promising sign of how the capacity of all -teachers, learners and practitioners (roles that all individuals move between)- can be further developed.

[Editor's note: If you'd like to join the conversation with a post of your own, please send us an email or tweet. See the original post for inspiration: Developing research communication capacity: lessons from recent experiences]

A new political economy of research uptake in Africa: overview

RAPID and Mwananchi have published a series of studies by Emma Broadbent on the political economy of research uptake in Africa.

A bit of background: The Evidence-based Policy in Development Network (ebpdn) was set up to promote our understanding of the role that evidence plays in policy-making in developing countries and in international development policy. Several studies and events have helped to shed light on the factors that explain the uptake of evidence; factors that the Research and Policy in Development Programme synthesised, in 2003: the political context, the nature and presentation of the evidence, links or networks, and the external environment.

However, the ebpdn's attention shifted in the late 2000s from an effort to understand the complex linkages that exist between research and policy communities to one focused on recording good influencing practices or demonstrating the impact that particular pieces of research had on policy. Research-project-driven case studies became the rule. This shift, in the view of some, limited the opportunities for learning that RAPID's original work had offered.

Partly in response to this development, while I was Head of the RAPID programme, we, in partnership with the ebpdn’s Africa network, launched a series of studies that sought to turn this trend around and pay greater attention to the nuances of the relationship between research and policy.

In 2009 a group of Latin American researchers worked on a book published by ODI and International IDEA on the relationships between think tanks and political parties. The studies recognised that it was not possible to study policy research institutes, or think tanks, without understanding their political contexts. The case studies from Colombia (Partidos políticos y think tanks en Colombia), Ecuador (Partidos políticos y think tanks en el Ecuador), Peru (Think tanks y partidos políticos en el Perú: precariedad institucional y redes informales), Bolivia (Partidos políticos y think tanks en Bolivia), and Chile (Los think tanks y su rol en la arena política chilena) illustrated the complexity of the relationship between research and policy, as well as between researchers and policymakers. The idea of two separate research and policy communities was discarded, and the importance of their historical co-evolution highlighted.

A series of background studies for Sub-Saharan Africa, South Asia, and East and Southeast Asia followed the Latin American studies; and these were followed by a more recent volume of studies that pays particular attention to the relationship between media and research centres in Latin America, promoted by ebpdn members in the region.

Inspired by this, Emma Broadbent took on the challenge to describe the relationship between think tanks and their environment in Sub-Saharan Africa. Instead of focusing on an organisation or a piece of research, she took the policy debate as the unit of analysis.

Here, a policy debate is understood as a contested policy issue involving any number of actors who contribute to the debate by offering an argument relating to any aspect of the policy, for instance the policy problem, policy options, means of implementation and monitoring and evaluation. A policy debate can take place in a single space as a one-off event (in which case the number of participants is limited), or can occupy a limitless participatory space over a period of time. This paper is concerned with the latter.

Policy debates are often conducted with reference to political interests and faulty evidence, with each participant in a debate coming to the table with a particular ‘ask’ and understanding of the policy problem. Debates are thus unequal playing fields: they are made up of participants who possess varying objectives, expectations, capacities, understandings, motivations and commitment. Importantly, only some of these may be made explicit, given the potential for some actors not to think and act in a unified manner. For instance, actions may not reflect stated values, or stated intent may not accurately reflect actual intent.

Using research-based evidence as a starting point for a case study also hides the unavoidable fact that evidence does not mean the same to everyone. The label is often attached to a great many things: facts, opinions, arguments, and observations. Little is said about the perception that different policy players have of these different types of evidence or their source; even if, as we know and as Emma Broadbent's studies show, this perception plays a significant role in explaining why certain ideas are more rapidly accepted than others.

Most importantly, the focus on research-into-policy case studies assumes that what matters in policy decisions are the facts and findings emerging from studies, rather than the arguments that, by their nature, must draw from a range of sources of knowledge and power: values, tradition, legislation, fears, imagery, etc. Arguments and big ideas are what change the world. Facts and findings simply provide them with ammunition.

The studies discussed here focus on policy debates in four of the countries in which the Mwananchi programme, which provided support for the study, operates.

They offer an opportunity to address concerns about how evidence is used in policy-making. With the debates as a starting point, Broadbent traced the different arguments used by the various parties involved back to their origins. She considered the different interpretations and sources of the evidence presented and employed by different actors; the roles that local and international policy actors play; and the specific and relative role played by research centres, researchers, and research-based evidence.

As a consequence, the case studies offer us a much richer description of the context, as well as ample opportunities to investigate further the complex relationship between research and policy.

It is worth reviewing the synthesis paper: Politics of research-based evidence in African policy debates. Its main findings, conclusions and implications include:

  • Surprisingly (?), given the attention paid to making policy more evidence-based, in all four cases, and especially in the Zambian one, the role of research-based evidence in the policy debates was relatively high.
  • However, we should not overestimate the role of research-based evidence:

Even when it is used, research is often poorly referenced and seemingly selective; the full implications of research findings are poorly understood; and the logical leap required to move from research cited in relation to a specific policy problem (e.g. HIV/AIDS transmission trends in Uganda) to the policy prescription or solution proposed (e.g. the criminalisation of HIV/AIDS transmission) is often vast. Sometimes, research-based evidence plays almost no role, and arguments on one or more sides of the debate are driven by personal prediction, assumption, reflection on past precedent and commitment to the idea of progress. The case studies each emphasise the role of different types of evidence, particularly that arising from citizens, or the grassroots.

  • To assess it, we should consider three types of factors that explain the role research-based evidence plays:

Debate-specific factors, relating to the locus of a debate and the perceived existence of a policy debate;
Discursive and cognitive factors, relating to how policy debates are framed, how research and evidence are understood, and research capacity at the institutional level; and
Proximate, agency-oriented factors, relating to the political, tactical and strategic factors that intersect with the nature of the debate and the discursive and cognitive aspects of policy debates identified.

  • A key finding of the studies is that greater efforts need to be made to unpack what we understand by ‘evidence’ and recognise that we may not all be talking about the same thing. It may very well be that evidence is readily available but that, in fact, what is lacking is the capacity and incentives to use it. Not using certain evidence can be, in fact, a strategy -entirely logical if one recognises that policy processes are necessarily political:

However, when considering why the role of research-based evidence is smaller, this paper argues that this cannot be explained in terms of a ‘lack’ (of capacity, of research, of funding, of space for dialogue, of ownership) which can be filled (more capacity, more funding, more dialogue, better access to research); rather, it is not being used because there are significant incentives not to use it. Instrumentalisation of lack of capacity—which makes itself known in areas other than research–policy in Africa—thus describes a situation where there are significant advantages to a lack of capacity (assessed – in admittedly ill-defined – terms of the capacity to undertake, understand, and use research-based evidence), and/or significant disadvantages to improving this capacity (again, in this case, measured in terms of research-based evidence). The situation is thus sustained and in fact instrumentalised in order to fulfil a number of varied and interrelated objectives, including resistance to reform, the defence of national identity and autonomy and avoidance of scrutiny.

  •  Finally, the implications and recommendations of the research are relevant for think tanks and their funders (edited quotes):
    • Researchers are supported to promote 'my' research, with little acknowledgement of the inevitable political interests, constraints, pressures and incentives that research is a product of, or of its discursive context.
    • Indeed, a more fruitful—and significantly more considerable—undertaking would be to turn our attention to improving the quality of policy debates to enhance the ability of people to discuss policy using critical thought.
    • A central part of any effort needs to address levels of understanding relating to research methodologies and the philosophy of science, in order to help users of evidence understand and appreciate the limitations of particular evidence and locate an approach to gathering evidence among wider discussions about what constitutes valid evidence and rigorous research.
    • Approaches to supporting ‘better’ policy debate would also include supporting the role of ‘mediators’ to analyse debates, thereby creating something of linearity in a debate in which evidence gaps can be identified and public demands for research-based evidence made and filled.
    • In some cases, what appears to be a lack of capacity to undertake, use and understand research-based evidence cannot be addressed purely through ‘more’: ‘more’ capacity, ‘more’ research and ‘more’ links between researchers and policymakers.

Many useful tools courtesy of the ebpdn

Clara Richards, the ebpdn coordinator, has put together a list of tools shared by its members over the last few months. I am taking the liberty of sharing them with a wider audience. Thank you, Clara.

Organisational advocacy capacity tools

  • www.reflectlearn.org offers several tools that you could use to undertake organizational capacity assessments.

From previous discussion on organisational assessment:

  • Organizational Capacity Self-Assessment Tool Guidelines for Developing an Organizational Training Plan: This guide from the Academy for Educational Development, and funded by USAID, walks organizations through key items that should be in place to situate an organization for success with organizational training, including a self-reflective rubric for assessing capacity around each category (management & governance, human resources, managing activities, relationships, etc.)
  • Using Data to Improve Service Delivery: A Self-Evaluation Approach From the Center for Applied Research on Population and Development / USAID, the guide is aimed at development health workers wishing to improve services at their facilities. It follows a five-step approach of collecting data, analyzing data, evaluating the situation, finding a solution, and monitoring the resulting plan of action. Each step is explored including concrete advice and examples.

Latin American think tanks: using knowledge to inform public policy

A few months ago I reported on an event I helped organise in Lima on think tanks. It was hosted by CIES and co-organised by GDNet, the Think Tank Initiative, CIPPEC, Grupo FARO and the EBPDN. Micaela Pesantes has shared the event's report with me, and I share it with you on Scribd (in English, this time).

New Book: Think Tanks, politics and the media in Latin America (in Spanish)

On 11th-12th August, onthinktanks, CIES, ODI, the Evidence-based Policy in Development Network, CIPPEC, Grupo FARO, GDNet, and IDRC's Think Tank Initiative co-organised a meeting of think tanks from all over Latin America. The directors and/or deputies of at least 30 think tanks came together to share and learn about the business of running these kinds of organisations.

On the 11th, the book: Vinculos entre conocimiento y politica: el rol de la investigacion en el debate publico en America Latina was launched with the participation of Enrique Mendizabal and Norma Correa (editors), Martin Tanaka and Mercedes Botto (authors), and Antonio Romero from the Think Tank Initiative commenting. The book follows from a study edited by Enrique Mendizabal and Kristen Sample on the relationship between think tanks and political parties in Latin America.

You can read the book on Scribd (I hope it will soon be published in English).

The book’s outline:

Contents:

Chapter 1: Researchers, politicians, public officials and journalists in Latin America: in search of a great conversation. Norma Correa Aste (Pontificia Universidad Católica del Perú) and Enrique Mendizabal (onthinktanks.org)

SECTION ONE: Framework studies

Chapter 2: The relationship between research and public policy in Latin America: an exploratory analysis. Martín Tanaka, Rodrigo Barrenechea and Jorge Morel (Instituto de Estudios Peruanos, Peru)

Chapter 3: Think tanks in Latin America: a comparative X-ray of a new political actor. Mercedes Botto (Facultad Latinoamericana de Ciencias Sociales, Argentina)

Chapter 4: The role of the state in financing public policy research in Latin America. Martín Lardone and Marcos Roggero (Universidad Católica de Córdoba, Argentina)

SECTION TWO: Case studies

Chapter 5: An odd couple: the relationship between the media and public policy research centres. Ricardo Uceda (Instituto Prensa y Sociedad, Peru)

Chapter 6: The media and the use of research in public policy in Latin America. Cases: Clarín (Argentina), El Diario de Hoy (El Salvador) and La Jornada (Mexico), March-April 2010. Pablo Livszyc and Natalia Romé (Instituto para la Participación y el Desarrollo, Argentina)

Chapter 7: Think tanks: the media of power in Evo Morales' Bolivia. Rafael Loayza Bueno (Universidad Mayor de San Andrés and Universidad Católica San Pablo, Bolivia)

Chapter 8: The role of public policy evidence in contexts of polarisation: the conflict over export duties in Argentina. Tomás Garzón de la Roza (Universidad Austral, Argentina)

SECTION THREE: Taking stock and a research agenda

Chapter 9: Political structures and the use of research in public policy: method and hypotheses for a research agenda. Adolfo Garcé (Universidad de la República, Uruguay)

The online version of the book can be found here and hard-copies can be purchased from the Fondo Editorial de la Universidad del Pacifico (profits go towards publishing more books).

Think tanks in Latin America

More than 40 representatives of Latin American think tanks met in Lima on 11-12 August to share new research and experiences.

The event was sponsored by the EBPDN, the Think Tank Initiative and GDNet, and co-organised by ODI, CIES, Grupo FARO, CIPPEC, and onthinktanks.

The full programme in Spanish is here; below is an outline of the event and key links to the resources (in Spanish).


Day 1
Panel: Lessons learned by think tanks in the region
  1. Politics moulded by research: the experience of Fundacion Chile 21. Eugenio Rivera Urrutia, Director of the Economic Programme.
  2. Navigating politics with a long-term objective. Fernando Straface, Executive Director of CIPPEC; Leandro Echt, Civil Society Programme.
  3. Bringing research and public policy together: CIES' experience. Javier Portocarrero, Executive Director.
  4. Centro Latinoamericano de Economía Humana (Uruguay). Leopoldo Font, Director General.
  5. FEDESARROLLO and Colombian public policy: using knowledge for public policy influence. Roberto Steiner.
Panel: Financing and the media
  1. The role of the state in the financing of public policy research in Latin America. Martín Lardone and Marcos Roggero.
  2. The media and the use of public policy research in Latin America. Pablo Livszyc, Natalia Romé and INPADE-FOCO.
Lunchtime talk: Research and public policy
  1. Mercedes Araoz, a researcher and former Minister of Trade, Minister of Production and Minister of Finance, as well as a former presidential candidate. [see her Bio]
Book launch: Links between research and politics
  1. The links between knowledge and politics: the role of research in Latin American public debate


Day 2
Panel: Political influence during electoral periods
  1. Influence during elections: experience and lessons from the "Elecciones 2011 Centrando el Debate Electoral" project in Peru. Norma B. Correa Aste.
  2. How to influence during electoral periods? Lessons from "Ciudadanizando las Políticas": public policies, transparency, independence and social change. Andrea Ordóñez, Grupo FARO, Ecuador.
  3. Deliberative Electoral Strategies and Transition from Clientelism: Experimental Evidence from Benin. Leonard Wantchekon, Princeton University.
  4. Influence in electoral processes: a political perspective. Santiago Pedraglio.
Panel: Monitoring and evaluation
  1. Monitoring and evaluation of policy influence: it is time to get started. Vanesa Weyrauch, CIPPEC.
  2. Evidence-based policy influence: influence, monitoring and evaluation. Werner L. Hernani-Limarino, Fundación ARU.
Reflection and future collaboration
  1. Conclusions of the workshop and suggestions for future collaboration.

Key principles to guide policy influence and research uptake: a synthesis of an interesting discussion

Clara Richards has just posted a very good summary of a discussion that a few of us had on the EBPDN. I repost it here (with only a few minor edits for consistency with this blog); if you would like to join the discussion, you can sign up to the community if you have not already.

A few weeks ago, Jeff Knezovich shared a few principles of policy influence and research uptake. In search of stories and examples of influence on policy, I re-read this debate and found it, again, very interesting. I think it is worth wrapping up the discussion and drawing a few conclusions.

The principles: Country driven, Two-way process, Objective led, Embedded in the research process, 'Being there', Accessible, Operating in complex environments, Reflective and adaptive, Internal and external. The hot principles that made the most noise in this discussion were 'being there', 'accessibility' and 'country driven'.

The 'being there' principle divided the waters into a more technical, online-communications debate and a more conceptual discussion. The technical group discussed (or better, shared knowledge on) the best channels for online communications and the difficulties of measuring the new ones over which we have much less control, such as Facebook, Twitter and posts on other sites. Andrew Clappison and Nick Scott exchanged concerns, difficulties, ideas, and the pros and cons of tools such as Hootsuite and Google Analytics. For further discussion, Nick suggested visiting the 'being there' online communications series on On Think Tanks, which already has Parts I and II and was published on the ebpdn last week. It covers ODI's experiences -both good and bad. It also outlines how Nick has created a dashboard tool for ODI to collect statistics from those multiple sources, as a first attempt to bring some order to ODI's statistics and create a space for intelligent analysis of online communications. We will probably keep on learning a lot about this important topic.
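The post does not spell out how Nick's dashboard works, so purely as an illustration, here is a minimal sketch of the general idea: merging per-channel statistics exports (say, CSV downloads from a web analytics package, Twitter and Facebook) into a single table for side-by-side analysis. The file names and column layout are assumptions, not a description of ODI's actual tool:

```python
# Illustrative sketch only: merge per-channel statistics exports into one
# table for side-by-side review. File names and columns are hypothetical.
import csv
from pathlib import Path

# Each export is assumed to hold per-day rows with a 'date' column and a
# channel-specific count column.
SOURCES = {
    "website": ("analytics_export.csv", "visits"),
    "twitter": ("twitter_export.csv", "mentions"),
    "facebook": ("facebook_export.csv", "engagements"),
}

def load_counts(path: Path, value_col: str) -> dict:
    """Read a per-day export into a {date: count} mapping."""
    counts = {}
    with path.open(newline="") as f:
        for row in csv.DictReader(f):
            counts[row["date"]] = counts.get(row["date"], 0) + int(row[value_col])
    return counts

def build_dashboard(out_path="dashboard.csv"):
    """Write one CSV with a column per channel and a row per date."""
    per_channel = {
        name: load_counts(Path(fname), col)
        for name, (fname, col) in SOURCES.items()
    }
    dates = sorted({d for counts in per_channel.values() for d in counts})
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", *per_channel])
        for d in dates:
            writer.writerow([d, *(per_channel[ch].get(d, 0) for ch in per_channel)])

if __name__ == "__main__":
    build_dashboard()
```

The value of a dashboard like this lies less in the code than in the decision it forces: agreeing on which numbers, from which sources, are worth looking at together.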

The other group had a more "conceptual" discussion, since this principle takes on another (similar) meaning when we think of offline communications. Francisco Perez and Enrique Mendizabal both agreed on the importance of talking face to face with those we want to communicate with. This is a huge challenge for 'international' initiatives. Not just because the audiences/publics are not there with you in your office in London or Washington, but because talking to people face to face in a meaningful way needs a lot more preparation than we tend to plan for: you cannot just fly in and expect to get anywhere (Enrique). This argument is also linked to the other hot principles: accessibility and country driven.

No surprise these principles made so much noise. There is huge concern about how international organisations are approaching them, and some questions came up in this debate: do [international organisations] consider that in developing countries the majority of people live in villages, where power, newspapers, the internet, television and radio are not fully accessible to most? The strategy of being country driven sounds good, but what do we mean by country driven? Driven by the government, by NGOs, or by people at the grassroots level? The words 'country driven' may make a good slogan, but in reality they are not applicable, because some donors, NGOs and governments (politicians) have different priorities, as well as personal interests, which do not take people's priorities and best interests into consideration.

These questions also brought to the table the complexity of influencing policy. A concern shared by Andrew and Enrique was whether the work of all of these organisations should be to try to set or inform agendas rather than to change this or that policy. Andrew says that perhaps we should be far more frank and realistic about 'influence' and focus a lot more on the next rung down the ladder (i.e. interest groups, civil society and others).

This is a complex discussion, and it is by no means going to be settled here, but the debate itself is enriching and makes us stop for a moment in our busy lives to reflect and think about our job. By exchanging ideas we try to improve our influence on policy, so thanks to all of you who have participated in this debate. Some conclusions came up on how to improve the utility of these principles:

  • The importance of having LOCAL communications TEAMS
  • Turn the initiatives around: find out what already exists and aim for more 'country driven' initiatives instead of donor initiatives.
  • Get the different local publics to talk to each other so that people and communities get the benefits
  • Use simple tools for complex problems
  • Be flexible and able to readapt
  • Use more arguments: they are more efficient than evidence alone
  • Consider lessons from other countries, in particular in relation to the roles played by the media and the private sector
  • Map out the system
  • Consider incorporating a third set of relevant stakeholders
  • And the one I like best: practice, practice, practice!

A number of resources were also shared during the debate.

Programming for complexity: how to get past ‘horses for courses’

Harry Jones, from the RAPID Programme, summarises and comments on a series of discussions generated by his work on complexity. He tackles the 'horses for courses' argument by addressing two important questions. Firstly, and most obviously: how do you choose the right horse for your course? Whether we're talking about policy instruments, evaluation methods, or gambling on horses, this will never be an easy question. Secondly: what are we arguing against? Implicitly, 'horses for courses' is cast against a 'blueprint approach', where a few standardised solutions (whether tools, methods, or more generally types of programmes) are rolled out in diverse contexts irrespective of context.

