
Posts tagged ‘RAPID’

A new political economy of research uptake: overview from studies in Latin America, Africa and Asia

This post summarises a number of studies produced by ODI, IDEA International and others between 2009 and 2012 on the political economy of research uptake. It presents cases from South America, Africa and Asia, edited books and other resources.

Read more

The Power of Film: Turning your talking head video into a story

I want to explore how to take your next steps into the realm of online film to develop something more elaborate: to produce an online ‘story’. Getting this right is almost as important as who you pick to do your filming and is not always an easy process.

Read more

Research communications support: why do donors, think tanks and consultants keep making the same mistakes?

[Editor's note: Caroline Cassidy is the Research and Policy in Development (RAPID) Programme's Communication Officer. This post is a response to: Developing research communication capacity: lessons from recent experiences, and can be read alongside Vanesa Weyrauch's own response, coming up this Wednesday]

Building capacity to develop research communications skills and competencies for policy influence is not a new thing. There are a multitude of players involved in the process who have been working in this area for years. And evaluating that capacity development is not really a new thing either. So why then should I be writing this blog if what I am about to say is nothing new? Because, despite clear recommendations for better support, time and time again, donors, think tanks and consultants keep coming up against the same challenges: leaving research communication to the end of the project, then getting caught up in a cycle of workshops and interventions that are unlikely to have the desired impact, delivered when researchers or teams are already looking to their next area of work.

I arrive at this type of capacity development from ODI’s Research and Policy in Development (RAPID) programme, where I have been working with the team to build on ODI’s years of work helping to develop the capacity of researchers and organisations in a variety of contexts to have impact in the policy realm. Enrique and Martine’s evaluation findings from a recent communications project that RAPID and INASP worked on for IDRC last year identify some very interesting (though sadly not all new) issues that frequently surface when we do this type of work: contextual concerns – in a short space of time, can a consultant really get to the crux of the project without having a strong working knowledge of the context themselves?; support often comes at the end of a project, so that it feels like it is ‘tagged on’ as an extra dimension, rather than an integral one; and ensuring you have the right people in a team involved in the first place, who can benefit the most from the support.

One recommendation from Enrique and Martine that I don’t think we at ODI have seen before is assessing demand and talking directly to the grantees who need support before a contract is even signed, then deciding whether this capacity support should be provided and to whom. This is also related to another lesson from the report on researcher incentives and pressures beyond communications, and the fact that many do not believe it is their role to engage at all – that it is someone else’s job. Therefore, assessing the demand and finding the right people within the organisation to work with as early as possible is absolutely critical (and then re-evaluating this throughout the duration of the support, as circumstances alter). And if it looks as if it’s not going to have the necessary impact, consultants and think tanks should have the ability to just say no from the outset.

Yet, despite these and other well-established, clear and very sensible principles, there seem to be a few key confounding factors that often impede their implementation:

The first is funding: although there is a growing consensus on the importance of communicating research, funding for communication has undoubtedly suffered at the hands of the economic downturn and the growing ‘value for money’ agenda. It is not always seen as a major priority in the research cycle and is often too closely, and even wrongly, associated with branding and marketing, rather than policy influence. Moreover, even in the communication arena, donors often favour interventions that lead directly to visible outputs, like the workshop.

Secondly, as Enrique and Martine emphasise, there is often poor planning: donors and organisations realise quite late in a project and budget cycle that the teams need extra support in this area, but with not much time and little funding, a ‘quick’ workshop is often seen as an immediate ‘magic wand’. As a blog by my colleague Ajoy Datta highlights, workshops do give a good introduction to the topic and some initial support, but are unlikely to make a real impact once the participants have left the building.

I also think that there is still the misconception, at some levels, that researchers and teams shouldn’t be thinking about the communication of their work until later in the process, or indeed towards the end. However, whoever leads on communications needs to engage with stakeholders as early as possible to ensure relationships are cemented and that, ideally, decision-makers have buy-in.

And finally, even if they could do all of the above, donors frequently do not have sufficiently flexible mechanisms and incentives to support a more appropriate response, as discussed in a recent ODI background note: Promoting evidence-based decision-making in development agencies.

So, faced with all this doom and gloom, what can be done? While workshops can still be useful, in RAPID we are now trying to incorporate them, where possible, as part of a wider and longer involvement in a project, ideally one where we are involved from the beginning. For example, we are currently working on a two-year project with the International Initiative for Impact Evaluation (3ie) on communications support to their grantees and knowledge management development (at an organisational level), and another three-year project on monitoring and evaluating grant policy influence. The latter is in consortium with three other regional organisations: CEPA (Sri Lanka), CIPPEC (Argentina) and CommsConsult (Zimbabwe). It is an exciting, though, we recognise, rare opportunity to work at different organisational levels to do some thinking, develop tools, research and capacity work in a ‘quality learning laboratory’. Support will be provided by locally based teams working in context, prioritising face-to-face engagement (which does include workshops!), but also using online engagement where necessary. All of this will hopefully help to ensure better impact, longevity and buy-in through stronger, more collaborative relationships between researchers and policy-makers and, from our side, better contextual knowledge.

And for other projects, where we are working with smaller organisations and donor budgets, we are trying to ensure that there is additional support around the workshops through mentoring, field trips and local partners, and we will certainly take on board the recommendations put forward by Enrique and Martine. Sharing evaluation findings in early discussions with donors can also make a big difference. An organisation I am working with decided to implement more face-to-face support because the donor read and assimilated the recommendations from another project’s evaluation report.

Communications capacity development is a constant learning process and there is no winning magic formula. But nor should there be, because good support is so dependent on the organisation, project, participants and context that ‘shoehorning’ in a ready-made approach or template is not going to work. The report contains some useful principles to guide new forms of support and to encourage donors, think tanks and consultants alike not to fall into the same traps of short-term support that frequently delivers only mediocre results. And above all, interventions are far more likely to become embedded into the life of a project (and hopefully beyond) if they are part of the project from the beginning and not left as an afterthought.

[Editor's note: Vanesa Weyrauch's response will come out on Wednesday, but if you'd like to join the conversation with a post of your own, please send us an email or tweet. See the original post for inspiration: Developing research communication capacity: lessons from recent experiences]

RAPID Outcome Mapping Approach: more useful when used to plan the whole policy research initiative

(Please note that in this post I am referring to policy research initiatives or programmes: initiatives that have explicit policy influencing objectives.)

The RAPID Outcome Mapping Approach is a methodology that I helped develop while working for the RAPID Programme at the Overseas Development Institute. I think we made a mistake (well, more than one, but let me focus on this one today). RAPID has always been in high demand when it comes to helping policy research organisations and programmes to plan, monitor and evaluate policy research influencing strategies. It is (and I still am) called in to help after the overall policy research programme has been designed: the objectives (and logframes) have been decided and the contracts have been signed with the funder.

Our mistake was to pitch it this way. We accepted (or did not care to challenge) the idea that there were separate components: research, capacity building, …, and policy influencing (which focused mainly on communications), and that it was the latter that ROMA could help with. We let the researchers deal with the research component and took it as a given. There is a reason for this. Historically, RAPID has been seen within ODI as non-research-based (even though its work is quite solidly based on a great deal of research) and so we chose to focus most of our attention away from discussions related to the planning of the research component. We assumed (and it could still work) that this was a safe way in. Unfortunately, researchers, under pressure from donors to focus more and more on communications, still protect the research component and shield it from approaches such as ROMA. It is my impression that they are willing to talk about policy influence, research uptake, communications, etc., as long as the research component is not affected.

But this is their mistake. ROMA is not that useful when it is brought in after the research component and the programme’s objectives have been decided. ROMA (and other similar approaches) is much more useful when it is used to plan the entire programme: including the research component of a policy research programme.

ROMA is about critical thinking. That is all it is. It can be used in any situation (big or small) and circumstance because it facilitates a process of reflection about our context, organisations, skills, objectives, partners, audiences, tactics, tools, how to use them, why, etc. It helps us to explain why we are doing what we do, and to check and re-check whether it is the right thing to do as more information becomes available. Users go through a narrative that helps them to identify and define objectives, think about the (broad and narrow) policy context that affects them, identify the main players in this context and those that the programme may want to target, determine more specific objectives for each, consider various ways of achieving them, develop and choose the most appropriate approaches, tactics and tools, etc.

Among these approaches, tactics and tools are the usual: media campaigns, training and education, digital communications, networking, …, and research. Yes, research. In a policy research programme, research (analysis, literature reviews, case studies, systematic reviews, impact evaluations, randomised control trials, clinical trials, etc.) is a component of the overall programme, just like communications, capacity building, networking, etc., are components of the programme, too. Hence new research, like some of the activities of the other components, is not indispensable. It may very well be possible to affect policy by focusing on existing research and simply promoting a public debate on a policy issue; or by improving the capacity of governments to make more informed decisions; or by creating formal links between policymakers and experts; etc.

Similarly, it may very well be that new research is absolutely necessary. In these cases, however, the research design cannot happen in isolation from policy influencing considerations. What kind of research is the most appropriate? ROMA can help decide what kind of research might be more relevant or useful to achieve the programme’s objectives. What questions should it answer? ROMA can help decide what questions need to be answered to develop the arguments that may influence the programme’s audiences. Should it be done collaboratively? ROMA can help decide. Who should we collaborate with? ROMA can help. What should be the outputs (products) of these research projects? ROMA can help. It can even help us decide who should be the researchers. I recall a case when a minister told me that the government had no problem with the research methods and conclusions but could not really use findings from the researcher who had carried it out. In another case, ROMA helped to avoid this situation. I say ROMA but of course I mean ‘a planning methodology like ROMA’.

The problem is that all these questions are currently decided before any discussion about the context, audiences, policy objectives, and the other components of the programme has taken place. Even the proposal writing process (and I have participated in many of these) is compartmentalised and often separates research from communications from capacity building from M&E. Each section tends to be drafted separately and then put together a few days before the deadline, the logframes are prepared at the last minute, and all is then submitted to the donor. And all this is done before any real analysis of the policy context has been undertaken. I know this because whenever we come in to help with policy influencing, the first thing we do is ask about this; and the answer is often the same: no. But by then it is too late.

Here is what I propose:

  • Before developing a strategy, the donor or the organisations bidding for the policy research programme should carry out a ‘baseline’ study of the policy they intend to affect. This could be a political economy analysis of the policy process, or a study of the discourses that shape it. It should identify the various players involved, their interests, objectives, their use of evidence (or not), networks, etc. AusAID has recently conducted a series of diagnostics of the knowledge sector in Indonesia that could serve as an example. The kind of studies that Emma Broadbent has carried out on policy debates is also relevant.
  • This should help to clarify the policy objectives for the entire programme; they will be based on a realistic assessment of the context. Everyone these days seems to be talking about Theories of Change, but few base them on sound theories of how change actually happens.
  • In turn, these should help to consider which players the programme is proposing to focus its attention on and how it could influence them or contribute towards changing their policy behaviours. Contribution here is the key word.
  • This focus should also help to decide what may be the most appropriate approaches, tactics, and tools for the programme to employ. And this will include, possibly, a research component. Depending on the audiences and objectives this may be very theoretical, a bit more practical, quantitative, qualitative, participatory, etc.
  • The research component, when designed at this stage and not before, will benefit from having a baseline that explains what, how, and why research is used, a clear audience among the key policy players, clear objectives, and a good sense of what other approaches (tactics and tools) will be able to support and use research. This research will inevitably be better linked to the whole programme, rather than being an isolated component developed before anyone bothered to think about the context.
  • Once the strategy is developed, and only then, the right team can be assembled. Today, bids are put together after the programme ‘partners’ and staff have been identified. The right order, however, is to develop the strategy first and then find the right organisations and people for the job. It should not matter if they are in someone else’s team. Imagine if you hired someone and only then checked to see what they could and could not do. That is, in effect, what happens now.
  • Finally, and key to all of this, the programme strategy should accept that this process needs to be repeated over and over again. As the programme is implemented, new information will become available, new challenges will appear, new opportunities will unfold, etc. Therefore, new approaches may be more appropriate, new partners and staff may be needed, and old ones may have to be let go.

This is not advertising for ROMA. I do not really mind what planning approach is used. What I am arguing is that, for policy research initiatives, planning research and planning policy influence should not be separated.

Programming for complexity: how to get past ‘horses for courses’

Harry Jones, from the RAPID Programme, summarises and comments on a series of discussions that his work on complexity is generating. He tackles the ‘horses for courses’ argument, addressing two important questions. Firstly, and most obviously: how do you choose the right horse for your course? Whether we’re talking about policy instruments, evaluation methods, or gambling on horses, this will never be an easy question. Secondly: what are we arguing against? Implicitly, ‘horses for courses’ is cast against a ‘blueprint approach’, where a few standardised solutions (whether tools, methods, or more generally types of programmes) are rolled out in diverse contexts, irrespective of context.

Read more

The onthinktanks interview: Simon Maxwell (Part 1)

Simon Maxwell, former Director of the Overseas Development Institute, is now a Senior Research Associate of ODI; he chairs the Climate Change and Development Knowledge Network and leads the European Development Cooperation Strengthening Programme. He is also currently Chair of the World Economic Forum’s Global Agenda Council on Humanitarian Assistance. Other engagements include being a Trustee of the Fair Trade Foundation, a Member of the Policy Advisory Council of the Institute for Public Policy Research and Specialist Adviser to the House of Commons International Development Select Committee. But most crucially for onthinktanks, Simon led a significant drive of change within ODI during his tenure as Director.

In this conversation, Simon Maxwell (SM) and I (EM) discuss the current work and future prospects of think tanks. I must say that I found this exchange quite enlightening with respect to the way ODI had been evolving while I was a researcher there. Simon’s description of the reasons behind some of ODI’s most important choices in the last decade provides an insightful account of the type of decision making demanded from a Director.

Read the second part of this interview.

Enrique Mendizabal:   When I arrived at ODI in 2004, you welcomed me with a description of ODI as a think tank. However, I got a sense that most in ODI still saw the organisation as a research centre –a few even disliked the term think tank. What is the difference between a think tank and a research centre – is there any?

Simon Maxwell:  It’s interesting that people should have been questioning the direction of ODI and the mission statement, even as late as 2004! I think today there would be less debate, except perhaps as a running joke in the annual retreat – though I no longer work at ODI, so can’t verify the kitchen gossip.

Why did I use the term ‘think tank’ (and why did the Board sign off on the new mission statement in 1998)? When I first joined, in 1997, I spent some time thinking about and asking people about the USP, the Unique Selling Point, of ODI. I had come from the Institute of Development Studies, which often described itself as the national research centre on the topic. I knew just how many university departments, research institutes and centres in the UK worked on all our themes – rural development, international trade, aid, even humanitarian policy. What did we do, or could we do, that marked us out as different? One answer was that we were one of the few institutes in Central London, so could capitalise on a geographic advantage. DFID was round the corner, Parliament was up the road. ODI had a great reputation for the quality of its research, but also for its Briefing Papers, meetings and parliamentary work. It seemed logical to develop that side of our business, well described by the term think tank.

However, it was not a trivial change. As well as a new Mission Statement, it meant gradually learning to write our Business Plans in a different way, organise our budget differently, develop new competencies, and benchmark ourselves against a different set of institutions: the London think tanks concerned with domestic policy, rather than the university departments concerned with development studies. We set out to do all that without undermining the quality of research. It was not a project you could ever declare finished, probably, but by the time I left, we were producing far more outputs directed at policy-makers, were organising something like 80 public meetings a year, and were working closely with parliament and with all the political parties.

In addition, we had a pioneering programme, of which you were part, researching and training on the link between research and policy, and on the role of think tanks. I’d also like to think that all those induction meetings, like the one I had with you, contributed to a gradual change in the culture of the organisation. Do you remember, I used to give everyone four pictures on their first day, and suggest they kept them under their pillows at night? The four pictures were of the four role models of think-tank work: Scheherazade, the story-teller; Paul Revere, the networker; Isambard Kingdom Brunel, the engineer; and Rasputin, the policy fixer.

EM:   Were there risks in this approach? For example, ODI describes itself as ‘independent’. But is there such a thing as neutrality when you are actively trying to change policy? I have always had the feeling that so-called independence or neutrality masks ideological beliefs that do exist among researchers within the organisation. Can we do research and promote poverty reduction while being ideologically neutral?

SM:  Yes, there were risks, not so much that we would be seen as carrying the flag for global poverty reduction, which was unlikely to make us many enemies, but mainly that we would be seen as party political. That would have been very dangerous for us, as a determinedly independent institute, living in a political system which sees regular change of ruling party – especially since a good chunk of our funding came, and comes, via contracts from the Government. ODI always says it doesn’t do advocacy, but of course, when researchers have carried out a piece of research, they reach a conclusion and want to see it implemented! We squared that circle in a number of ways. First, by saying that the institution did not take positions, but that individuals could. Second, by differentiating products, so that Briefing Papers were relatively neutral, but Opinion pieces more outspoken. Third, by avoiding overt political controversy, even at the cost, sometimes, of blunting our public messages. And fourth, by bringing all the main political parties onto the Board of ODI, as a guarantee of propriety. I made some mistakes by being too outspoken, especially at the beginning, but learned a lesson from a German counterpart, who said to me ‘everything can be said, but not everything can be said publicly’. Good advice.

EM:  You have always emphasised the policy influencing mission of think tanks.  International development think tanks seem obsessed with measuring influence, but many think tanks funded by the private sector or philanthropists tell me that they do not bother with measuring impact; they know they are influential and are satisfied with media presence and access to key networks. Why is this?

SM:  To take the first part of this question first, I’m actually surprised by how few ODI-like independent think tanks there are in many parts of the world, including in Europe. For example, the group of think tanks we work with on European development policy contains several that do not really think of themselves as think tanks in the same way as we do, and we are constrained in our expansion by not being able to find counterparts in areas like Scandinavia. Of course, as Jim McGann is always reminding us, there are thousands of think tanks around the world, though not all on his list are independent, and some are not really think tanks.

As to measurement of impact, well, we want to know, partly so we can learn – ‘improving not proving’ is the name of the game in modern evaluation. In addition, however, we do need to ‘prove’, since future funding depends on being able to demonstrate value. As you know, the results agenda has become ubiquitous in development discourse. The main challenge is not to be captured by simplistic models of change, but to capture the full complexity of social, political and institutional change. I call this Results 2.0. The RAPID programme at ODI has done some great work on measuring impact in this new paradigm, for example using stories of change. We need far more, from all think tanks.

EM:  Are there any differences between ‘international development policy’ think tanks (ODI or IDS) and ‘domestic policy’ think tanks (IPPR or ResPublica)?

SM:  An obvious difference is our target audience: international policy-makers on the one hand, domestic on the other. That makes our life, as international think tanks, far more complicated. We have to target DFID, for example, but also the UN, the World Bank, the WTO, the UNFCCC, whatever. A starting point for think tanks is to ask ‘who is making what decision, when are they making it, and what product do you need, and when, to influence the decision?’ Usually, you can’t reach the international decision-makers on your own, so you necessarily need alliances –a model I have called, from an analogy with airline alliances, ‘policy code-sharing’. That also means a different way of working, especially with new think tanks in developing countries. On the other hand, no country now lives in isolation, and the global agenda drives domestic policy – look at the global financial crisis, or climate change, or the current food crisis for examples. I think that means the days of the purely inward-looking domestic think tank are numbered. Some realise it. Some do not.

EM:  Many think tanks in developing countries have developed a model that works: they do excellent research, produce high quality outputs (international journal quality) and provide advice directly to policymakers and donors through well established relationships of trust. However, they worry that this may not be good enough in the future. That, in the future, if they are not online, not in the media, not promoting public debates, their days as an influential and respected think tank may be numbered. Taking their research to the media, however, poses great risks and threatens their relationship with policymakers. How can they, in the words of a researcher in Indonesia, remain a trusted critical friend of the government and at the same time be accountable to the public?

SM:   We’ve already talked a bit about how to preserve trust, so let me pick up the point about new media. These days, we all have websites and most of us have blogs. Some, though not me, yet, communicate through Facebook or Twitter. There are two issues here. The first is about choices. If you spend your time writing blogs, the chances are you will produce fewer journal articles. In a research institute, that might be highly detrimental to your career, and to research funding for your Department. In a think tank, it might be the best way to reach your audience. Reward systems may need to be adjusted to recognise good communication, alongside, sometimes instead of, traditional publications. This is an issue for all development studies, actually, with the last research assessment exercise in the UK much preoccupied with how to value publications in indigenous languages or via new media. The other issue is speed. In the old days, someone would write a journal article or a book, and months later a review might appear, then, after a further interval, a reply. Now, think tanks work to the same rhythm as a newsroom, and need to plan accordingly. The Heritage Foundation is said to be first rate at this. I see think tanks in London who also set out to shape the evolving news agenda.

EM:  Research funders have become extremely interested in communicating the research they fund. But most efforts seem to be about communicating the findings of the research – the facts, the evidence – rather than developing complete and convincing arguments that make appeals to values, morals, law, etc. What makes, in your view, a good argument?

SM:  There is a literature on this. For example, I very much like Drew Westen’s book, The Political Brain. He argues that we need to appeal to both the rational and the emotional side of the audience – which I take to mean facts and figures, combined with good stories. But also, we need to understand policy-makers’ perspectives, and try to help: as Mrs Thatcher used to say, ‘Don’t give me problems, give me solutions’. A case in point in the UK just now is helping Ministers to make the case for development cooperation, at a time when aid is protected from swingeing public expenditure cuts. That doesn’t mean lying, of course, but it does mean addressing the question of how to explain the long-term impact of aid. ODI has been doing work which explores detailed case studies of development impact.

One other point. I talked earlier about blogs and Twitter feeds and the rest, but, actually, I see that really well-written books can be enormously influential. Paul Collier is a role model in this respect: a serious researcher, author of many quantitative journal articles, but also author of books which Ministers read.

EM:   Thanks. We will pursue this conversation in a second round, looking particularly at some management issues, like the role of the Board, and managing change.

Never mind the gap: on how there is no gap between research and policy and on a new theory (Part 1 of 3)

[This is the first of three posts that I hope to publish over the next couple of weeks. Together, they make up an essay I started to write while at ODI last year and that I have now decided to get back to. If you have any comments, please do send them my way, directly or by commenting below. The key question I want to answer in this first post: is there a gap between research and policy that needs to be bridged?]

The Research and Policy in Development (RAPID) programme has, for the last eight years or so, studied and promoted the role of research based evidence in policy processes. The underlying premise of the programme, and its many competitors and collaborators, has been the need to bridge the gap between research and policy. This, in turn, is based on the assumption that there is a gap that needs to be bridged.

It is this assumption that I want to challenge; and, by doing so I hope to reformulate the direction of those working in this field.

The space in the middle is full

In 2009 I edited a book focusing on the relationships between think tanks and political parties in Latin America. The cases studied in the book (Colombia, Ecuador, Peru, Bolivia and Chile) provided clear evidence that a number of networks, organisations and actors operate in the space between research and policymaking. In Peru, for example, technocratic networks were found to bring together researchers and policymakers, focusing their attention on long- and medium-term policy discussions around particular issues. One of the most interesting cases is the Macroeconomic Network, which has been credited with promoting a stable macroeconomic policy in Peru for nearly two decades. This network, like others, emerged out of a systemic lack of trust (and reputation risk) between political parties and research centres that led to an almost complete absence of formal institutional relations between these two groups throughout the 1980s and 90s. The members of these networks (who come from academia, consultancies, the private sector, the public sector, political parties, etc.), however, are free to participate in policy discussions with their peers – across centres, across parties, and across ideological camps.

In Chile and Colombia, which benefit from more stable political systems, internal political party think tanks were found to be more common. As in the U.S. and in Europe, political parties in the more politically mature Latin American democracies have developed their own research capacities or have negotiated formal institutional relations with research centres. Internal think tanks are also found in Uruguay and Argentina.

Where formal internal think tanks do not exist (for different reasons, including legal ones), it is still possible to find stable yet informal relations: in Peru, the Instituto de Gobierno is closely linked to APRA; in Ecuador, ILDIS to the Democracia Cristiana party; in the United Kingdom, IPPR was closely linked to the Labour government; and in the U.S., the Heritage Foundation has obvious links to the Republican Party.

Globally, there are also many public think tanks: DIE is the German Government’s international development think tank; and in Vietnam and China, line ministries have their own internal research centres or think tanks. This group could also include regulatory bodies, parliamentary commissions, scientific advisors, etc. Even the UN has its own; and some, like the UN’s Economic Commission for Latin America and the Caribbean (CEPAL) and UNICEF’s Innocenti Research Centre, are known for their high academic and policy-relevant standards. The World Bank and DFID, too, have enviable research and policy teams, at least in terms of their size and budget. And when they cannot manage to do it in-house, they work closely with think tanks, academics, and consultants to answer urgent and more important (although not as often as would be hoped) policy questions.

And this is before considering other far more important but often forgotten boundary workers (to borrow a concept from Robert Hoppe): unions, specialised and general media, professional associations, corporate research centres, consultancies, political parties themselves, public intellectuals, presidential or congressional commissions, consultation spaces, etc.

In other words, both at the national and the international levels, the space between research and policy is full: with academics, experts, policymakers and politicians who belong to informal and formal policy networks; with think tanks or research centres associated with policy actors; with researchers and research centres that operate within the policymaking boundaries of parties, governments and international organisations; and with a host of other organisations and individuals with equal right to participate in pursuit of their own private or public interests.

This is not just a one-off case or a coincidence

The cases from the Latin American study also showed that both communities are so tightly interconnected that it would be impossible, in some cases, to talk about one without the other. Furthermore, the relationships described above, that populate the space that has been assumed empty (and therefore in need of a bridge), are not new but part of a long shared history.

Think tanks have often mistakenly been described in the literature as independent and apolitical. This is based on the legal terms used to define them in more developed countries. However, the work on political parties and think tanks in Latin America, and on the relationship between think tanks and politics in Asia, for example, has provided us with ample evidence of the foundational and historical links that exist between the policy and research communities.

This relationship, of course, is not limited to the developing world. The United States, the Mecca of think tanks, provides one of the clearest histories of this co-evolution. According to Andrew Rich’s study of experts in U.S. politics, the original think tanks were set up in the early 1900s by philanthropists who believed in the role of science in progressive policymaking; later, think tanks set up after the Wall Street Crash were founded by philanthropists and progressive policymakers interested in the promotion of policy solutions to prevent social unrest and respond to the crisis; with the Second World War and the Cold War underway, the art of government in a globalised world became more complex, and the demand for more technical advice to deal with it led to the formation of foreign policy and defence think tanks founded by ideologically motivated donors; and later in the 20th Century (and in the early 21st Century) think tanks have been founded by political leaders and players with explicit connections to political parties and the pursuit of power.

In a recent book, The Argument, Matt Bai describes how the Republicans’, and then the Democrats’, strategies to win the White House largely depended on the capacity of partisan think tanks to shape the discourse.

None of this is new to the developing world: the history of Colombia’s political system provides a clear illustration. Parties and think tanks share a common origin: politically driven intellectuals engaged in the struggle for the formation of the Republic in the mid-1800s. These intellectuals, who came together around academic publications and newspapers, later formed the basis of political parties and, as policy demands became more complex, also of the first policy research organisations. Throughout the 20th Century, Colombian politics have been linked to the formation and dissolution of think tanks led by party leaders and contenders.

The same is true of Chile, where the democratic government which took power after Pinochet in the 1990s was largely made up of the leaders and experts of the think tanks and other civil society organisations that had opposed the repression of the 1970s and 80s and organised the opposition’s programmatic platform. During a short period, many of those think tanks found themselves without leadership and staff, and some had to close. On the right of the political spectrum, new organisations were set up to defend the old policies from the new government. According to Sergio Toro, parties and think tanks in Chile have reached an explicit but informal agreement: parties do politics and think tanks focus on their programmes.

In another study, we found the same intricate relation in East and Southeast Asia, albeit driven by different forces. The links between think tanks and the private sector, political and religious leaders, and strong single-party states span the entire life of the centres. Their foundation, agendas, strategies and finances are closely linked to the purpose for which they were set up: mainly to promote economic growth, regional integration and national security. Across the region, the main differences relate to the identity of their financial and ideological masters: think tanks in Japan and South Korea are closely linked to the private sector; in China and Vietnam, to the State; and in Indonesia and Malaysia, to their national or regional leaders.

Most of them, however, appear to fulfil a common function: to legitimise the prevailing developmental state narrative through their research. This is also the function that internal and associated think tanks in the U.S., Colombia and Chile fulfil.

However, think tanks carry out other functions, too. Orazio Bellettini and Melania Carrión, in a study of think tanks and political parties in Ecuador, explained that besides promoting evidence-based policies and legitimising them, think tanks also provide spaces for reflection and policy debate, develop the capacity of future cadres of policymakers and politicians, and, on occasion, channel funds to political parties or actors. Other authors have included auditing and educational functions. These functions overlap with those of political parties – and in many cases, with those of policy bodies within the formal state apparatus.

This overlap stems from the fact that they share a history, that over the years they have been affected by and reacted to the same forces, and that the relation we observe today is only a reflection of their current position relative to other political actors in the international, regional and national political contexts.

As in any history, the relationship between think tanks and their environment, and therefore the nature of the crowded middle, is in constant flux. For instance, new independent think tanks patronised by returning economists, business leaders and high-level political figures have emerged in China in the last decade. In Africa, funded by foreign donors, new research centres, NGOs and programmes have also appeared in the last decade to fill specific sectors with information, experience and advocacy.

Not everyone is well-connected

I hope to have painted a picture of a complex network or system of relations between research and policy actors that can easily expand to reach the most hard-core white-coat scientists at one extreme and the spin doctors of the party-political apparatus at the other. (I could possibly have talked more about the roles of universities and private sector organisations, but I did not want to extend this post too much.) If these (formal and informal) relationships did not exist, political decision makers would be unable to respond to scientific advances – and we know that this is not the case: legislation on telecommunications, health policy, animal health decisions, etc. are all examples of areas where independent (or at least independent of the interests of others) scientific knowledge has influenced decisions at a fairly quick pace (for good or for bad).

However, although the space between research and policy is crowded with players and relationships between them, not all researchers and policymakers are equally connected to the other members of the system. And this is one of the reasons why the impression of a gap remains so strong.

One of the RAPID programme’s most in-demand services is to support research centres and NGOs to develop policy influencing strategies. If there is one thing that can be said about most of the researchers and aid workers we work with, it is that few of them (hardly any, in fact) are so well-connected as to find our advice entirely useless. Some outliers pass through our preparatory filters (or are there as part of a broader programme and have little choice – to them I apologise – or because they are interested in finding out more about this), but there is also a self-selection process that precedes our interventions. Researchers who belong to the technocratic networks and internal think tanks that I mentioned before, by and large, do not need our support. And others, at the top of their academic game, are probably already aware that their reputation alone can be far more effective than all the strategic planning we may be able to help them with. They may enjoy our support but it is unlikely to be game-changing.

One thing that these well-connected researchers know is that policymakers, and their advisors, do not just talk to anyone or read anything – regardless of how appealing the briefing paper cover might look or how savvy the internet strategy may be. They rely on their own networks to access and interpret information. This might not always be the case in more developed civil services, where there are a number of formal mechanisms to facilitate this; but in many developing countries, often only informal channels are available – and trusted.

In any case, when prompted, everyone, regardless of their connections, can tell a story about how research has influenced policy, as well as how they or their peers, as researchers, have, at least once, influenced policymakers. So the idea that there is an insurmountable no-man’s-land that has to be bridged with new approaches and tools quickly begins to crumble.

The fact that some are not as well-connected as others should not necessarily be seen as their fault. Competent researchers are not by definition competent networkers or communicators. For a number of reasons (including the fact that decision makers do not have unlimited time to network and talk to every researcher interested in their work), not everyone can have access to the policy networks that matter.

Just as the relations between think tanks and political parties have evolved over time, so have the relations between researchers and policymakers –and between research and policy. In Zambia, where until recently, there was only one university, it is not surprising to find that most economists working in the government, civil society and the private sector know each other. And they have known each other for decades –they have learned everything they know together.

And the stories of researchers moving into politics and back are not unique to the US –in Latin America, Africa and Asia they are also quite common.

In conclusion, the middle ground, the space between research and policy, is full. Furthermore, this is not a snapshot or a coincidence but part of a historical co-evolution between policy and knowledge actors: one community does not exist (and in some cases, cannot exist) in isolation from the other. Finally, there are countless examples that show that the two main actors of this story, researchers and policymakers, are already linked.

The image that appears then is one of a system where some actors are better connected – either directly or through their personal or professional affiliations to organisations, networks and processes – than others. The better connected actors have, as in all networks, higher chances of making new and higher value connections and, as a consequence, command better knowledge of the system and how to use it. In this process, the least connected ones will feel increasingly isolated and have partial views of the whole system; and this will only reinforce the perception of a ‘gap’ and the need to build a bridge to where decisions are being made – in what seems like some far away land.

Therefore, rather than building a bridge, I would argue that we should learn to navigate through the system. What we need are maps.

Three considerations should guide us henceforth (to be addressed in future posts): a focus on research and policy rather than researchers and policymakers; the nature of the research and policy processes themselves, and the relations between them; and the role that information density plays in facilitating the use of knowledge in policy.

… next week: 2 of 3

Impact evaluations, research, analysis… what is the difference?

When it comes to policy influence, what is unique about impact evaluations in relation to other types of research? Let me explain why I am asking this question. When I was in RAPID, I was (and still am) asked to help organisations to develop policy influencing strategies. Sometimes this help came in the form of a workshop; at other times it was provided over a longer period through mentoring and support. Almost every time, the clients would ask for lessons tailored to their own contexts – which ranged from the politics of international donors to local NGOs, working globally, regionally or nationally, etc. This often meant that they wanted case studies from their region (e.g. Africa) or the sector they were working in (e.g. health).

Now, RAPID does not tend to advise on HOW to influence but on HOW TO DECIDE how to influence – there is a difference (although the communications team does help with some of the more practical aspects of this). So we have always expected that the context will be provided by the organisation we are working with, and that decisions about what specific influencing approaches to follow will also be theirs. This might sound like a cop-out but in fact it is an honest approach: we cannot possibly claim to be experts on every context and sector (we work with organisations all over the world that in turn work in a range of sectors). And in any case, we had to assume that those we worked with knew their context well enough – this, we found out, was a fairly naive assumption in some cases.

So to deal with this demand we tried to provide support in a way that would allow the client to present, up front, as much contextual and content knowledge as possible. And to do this, we provided some tools (but this is another matter).

Although the proposed planning process is applicable to all sectors and contexts (except that it may not be possible or necessary to follow all steps, or to be as detailed, in all situations), I accept that influencing in Africa (and in each country) is different from influencing in Latin America; and in health policy it is likely to be different from education policy; and so on. But it is also different to influence as a research centre than as an NGO; and so on. So focusing on context and content issues may in fact be misleading.

Recently, however, we have been asked to tailor-make our planning approach (the RAPID Outcome Mapping Approach) and recommendations on HOW to influence to impact evaluations. Behind this demand is the assumption that policy influencing based on the findings of impact evaluations is different from policy influencing based on the findings of other types of research.

So how different is it to influence on the basis of one type of research rather than another?

My view is that this question is not relevant – and certainly not useful. I will provide my reasons below, but let me also ask for your input. If you can demonstrate (or argue, because I am not demonstrating anything) the opposite, please do so; this is an open debate.

To start the debate, let me provide four reasons for my view:

  • Argument not evidence: I have used this before in this blog but I think it is still a relatively new idea in the research-policy linkages community. Policy (or programme or project – or, more broadly, behaviour) does not change because of a single piece of evidence. Change happens because new (or improved) arguments are convincing enough to affect someone’s beliefs, assumptions, premises and actions. These arguments are made up of a number of elements, for instance: evidence (from different sources), appeals (to ideology, values, rights, laws, interests, etc.) and imagery (metaphors, stories, etc.). These elements are put together into an argument. And so, even if the findings of impact evaluations are used, they are unlikely to be the only type of evidence, and it is not possible to separate them from the argument as a whole.
  • Credibility is in the eye of the beholder (or ‘any evidence is just evidence’): There is a view that impact evaluations are different from other types of research – that they are the gold standard of evidence. The scientific rigour involved in an impact evaluation, its proponents argue, sets it apart from all other methods. This may be true. Impact evaluations may be more reliable than other methods, but when it comes to influencing this only matters if (and only if) the person or people being influenced agree. And if they do, then, if anything, influencing will be easier, and there is therefore even less need to focus on differences or come up with lots more specific examples.
  • There are few full-time impact evaluators – and impact evaluation centres: While some people and organisations may be specialising in impact evaluations, most researchers do a bit of everything. Impact evaluations are just one more type of research they have to carry out in a normal year. And the same is true for the organisations they work for. As a consequence, they do not communicate impact evaluation findings alone. Therefore, the idea that they would have, or be able, to specialise in one particular type of influencing (based on the source of the evidence) does not seem to make much sense.

So, not only are impact evaluation findings tangled up with the findings of other types of evidence and other non-evidence components of a good argument; they are also, whatever their scientific rigour, not necessarily seen as any different from (or better than) other types of evidence by those being influenced (although some do see them that way). And to top it off, those attempting to influence are not necessarily impact evaluation specialists and therefore cannot possibly develop impact-evaluation-based strategies and ‘other sources of evidence’-based strategies to implement separately.

The fourth reason is more fundamental:

  • The hundreds if not thousands of cases gathered in the literature have given us a great many lessons (common sense, really) that are relevant to all cases. A lesson does not imply that one should necessarily behave in a particular way, though. For instance, a lesson may be that working with the media can help to open up the debate – but in many cases opening up the debate may not be desirable. This does not negate the lesson; in this particular case it is just not applicable. The usefulness of impact evaluation specific lessons may lie not in the actions that they suggest but in helping to communicate with impact evaluators and to convince them of the importance of planning for influence. In other words, the lessons from impact evaluation cases may be used as part of an argument employed on the researchers themselves. But whether they will be useful (more useful than lessons from non-impact-evaluation-based cases) or not is not relevant.
What do you think?
  • Is there anything about impact evaluation findings that makes influencing strategies (and actions) different from those where impact evaluations have not been used?
  • Is it useful to talk about impact evaluation based influence and non-impact evaluation based influence?
  • Is it worth the effort? Can we not learn from any case?

What kind of policy entrepreneur are you?

When I joined RAPID back in 2004, Simon Maxwell, then Director of ODI, had developed a questionnaire to determine the type of policy entrepreneur that one was – or, more accurately, the type of approach that one felt most comfortable with.

Last night, looking through my files, I found a spreadsheet I had put together for workshops and decided to upload it to Google Docs.

Here is the link to the Policy Entrepreneurship Questionnaire. Give it a go. All you need to do is follow the link, add a column as if you were using Excel (if there are none left) and complete it (you might have to drag the formulae to the new column – but I’ll try to keep up with it if there is demand). You should get an answer right at the end (don’t peek).
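
(For readers who cannot open the spreadsheet, here is a minimal sketch, in Python, of how such a questionnaire might be scored. The questions and scoring rule below are my own invention for illustration – they are not the spreadsheet’s actual formulae – and I am assuming, purely hypothetically, that each answer awards points towards one of the four role models mentioned in the interview above: story-teller, networker, engineer and fixer.)

```python
# Hypothetical sketch of a policy-entrepreneurship questionnaire scorer.
# The four archetypes are the role models of think-tank work described
# earlier on this page; the questions and scoring rule are invented for
# illustration only and do not reproduce the real spreadsheet's formulae.

ARCHETYPES = ["story-teller", "networker", "engineer", "fixer"]

# Each item pairs a statement with the archetype it counts towards.
# Answers use a 1-5 scale (1 = strongly disagree, 5 = strongly agree).
QUESTIONS = [
    ("I persuade best with narratives and metaphors", "story-teller"),
    ("I spend much of my time maintaining contacts", "networker"),
    ("I like designing processes and pilot projects", "engineer"),
    ("I work behind the scenes with decision-makers", "fixer"),
]

def score(answers: list[int]) -> str:
    """Sum each archetype's points and return the highest-scoring one."""
    totals = {archetype: 0 for archetype in ARCHETYPES}
    for (_, archetype), answer in zip(QUESTIONS, answers):
        totals[archetype] += answer
    return max(totals, key=totals.get)

if __name__ == "__main__":
    # Example: a respondent who strongly agrees with the networking item.
    print(score([2, 5, 3, 1]))  # -> networker
```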

Hopefully we can use the data generated. Please forward it to your contacts and within your organisations.

Let me know if you have any problems.

3ie, impact evaluations and policy influence

Caroline Cassidy, Communication Officer for the RAPID programme, has just published a post titled ‘What does influence mean for impact evaluations?’ on 3ie’s Mind the Gap conference blog. In it she mentions a very interesting presentation by Paul Gertler, who provided a refreshing and very nuanced view of what influence actually means:

Not just policy change (or programme change) but also the acceptance of new ideas, the incorporation of new evidence to the political debate, the development of new skills, etc. It was a shame that this came late in a research communications workshop we had been running. It would have made a great introduction to it.

For more information on the 3ie event you can visit the Mind the Gap conference website. There will be very interesting tools and presentations for think tanks -although I would hope that there is also some space to debate the real value and relevance of impact evaluations vis a vis other sources of evidence. I’ll get back to you on this.

 
