What is the future of think tank communications? IPPR, ODI, the Social Market Foundation and the Economist came together at an event in London to try to address this question. The report of the event is itself a perfect example of what can be done with very little effort, but careful planning.
This short post is in part an attempt to offer such a (wish) list of the kind of key skills and competencies that I think that a think tank director should bring to the position. And of course, this post is not just for or about ODI but is intended to inform other think tanks which may be going through a similar process. I am just using ODI as an example.
The ODI digital strategy, first outlined in a series of blogs for onthinktanks.org, was awarded Online Strategy of the Year 2012 at the prestigious Digital Communications Awards, held in Berlin on Friday. ODI beat off competition from multinational corporations and specialist digital agencies to claim this major award. This post is based on the speech given to the jury and explains very succinctly what the strategy is and where and why it has worked.
[Editor's note: Vanesa Weyrauch is the Principal Researcher of the Influence, Monitoring and Evaluation Programme at CIPPEC, which she created and has led since 2006. She founded a network of leading think tanks in Latin America, with the support of GDNet, Directores Ejecutivos de America Latina (DEAL). This post is a response to Developing research communication capacity: lessons from recent experiences and can be read alongside Caroline Cassidy's own post]
I am not an expert on capacity development per se, but at CIPPEC I have been a practitioner of a combination of such activities in Latin America over the last few years, and this has helped me reflect on and learn about what seems most promising in terms of helping others improve the way they try to influence policy through research. Much of this work has been carried out under EBPDN LA with ODI, and most of it under the programme “Spaces for engagement” with GDNet, a strategic partner for us in this field.
This is why Enrique’s and Martine’s review findings from a recent communications project that RAPID and INASP worked on for IDRC caught my attention. Just in time, I thought. After a couple of years of trying out several ways of improving what we know about policy influence (combining formal training workshops with online courses, technical assistance to think tanks, the design of an innovative M&E methodology focused on policy influence, etc.), we have decided at CIPPEC to develop a capacity building strategy for 2013-2018 that allows us to be more strategic in the way we use our limited resources to assist policy research organisations and their leaders to enhance policy impact.
Firstly, I believe that some responses to the questions posed by Enrique in his previous post, and some of his recommendations, may vary according to the type of individual or organisation taking part in the initiative (they tend to be very heterogeneous, which on the one hand enriches the exercise but on the other makes it extremely hard to please all participants). At CIPPEC we have had experience training networks on these issues, and even though network members might share beliefs, values, research projects, etc., a pre-diagnosis revealed that each member had very different capacities and interests in research communications. So how do we deal with this when resources are scarce? Ideally we would have the time and resources to work both in groups and individually, to support real change processes with viable plans and to enable cross-learning, but this is rarely the case. We therefore face the challenge of making the most of what is available; smart decisions that draw on the evidence shared by Enrique and on our own experience are then crucial.
Another key and related decision is whether those offering the support aim to train individuals, organisations, or both. The strategies for each differ significantly, and it is extremely difficult to make these decisions at the very beginning of a capacity building project, especially when a diverse group will take part in it.
Finally, another tricky but very profound question is: how do we monitor and evaluate these efforts? How do we know if and how we have contributed to developing this sort of capacity? I agree that stand-alone workshops are not the most desirable strategy, but I have heard of and seen people and organisations make a big change after attending one where excellent trainers were able to raise their awareness of these issues and pose the right questions at the individual and organisational levels. So what are we aiming at, and how will we know if we have done well?
An excellent paper that has significantly influenced how I think about all these issues, and how we plan to further develop our capacity to build capacity at CIPPEC, is “Learning purposefully in capacity development. Why, what and when to measure?” by Peter Taylor and Alfredo Ortiz. We need new thinking about these issues, and this paper triggers that type of thinking for all of us: donors, “trainers” and “trainees”. As we titled one of our handbooks, I believe we are all a little bit learners, teachers and practitioners. That is why the ways we generate and share knowledge are increasingly horizontal! For us, online courses have enlarged the opportunity to make this happen, as knowledge is shared and discussed among peers and colleagues. What participants in our courses ask, the reflections they make and the real-life examples they share have all greatly enhanced the knowledge we share in the next edition of the same course.
Finally, I am more and more convinced of the value of constant cross-fertilisation between theory and practice (and that is what we have tried to do in our work all these years), which will significantly affect what we consider valuable knowledge and how we share it. Sir Ken Robinson has very effectively conveyed the importance of rethinking education: there is a real and increasing call for creativity in the way we treat knowledge. For this, group learning is key; collaboration is the strategy for defining crucial questions and co-building the answers. Spaces like this blog, where we can share what we know and don’t know about the topics we are passionate about, are a promising sign of how the capacity of all of us as teachers, learners and practitioners (roles that every individual moves between) can be further developed.
[Editor's note: If you'd like to join the conversation with a post of your own, please send us an email or tweet. See the original post for inspiration: Developing research communication capacity: lessons from recent experiences]
Research communications support: why do donors, think tanks and consultants keep making the same mistakes?
[Editor's note: Caroline Cassidy is the Research and Policy in Development (RAPID) Programme's Communication Officer. This post is a response to: Developing research communication capacity: lessons from recent experiences and can be read alongside Vanesa Weyrauch's own response, coming up this Wednesday]
Building capacity to develop research communications skills and competencies for policy influence is not a new thing. There are a multitude of players involved in the process who have been working in this area for years. And evaluating that capacity development is not really a new thing either. So why then should I be writing this blog if what I am about to say is nothing new? Because, despite clear recommendations for better support, time and time again, donors, think tanks and consultants keep coming up against the same challenges: leaving research communication to the end of the project, then getting caught up in a cycle of workshops and interventions that are unlikely to have the desired impact, delivered when researchers or teams are already looking to their next area of work.
I arrive at this type of capacity development from ODI’s Research and Policy in Development (RAPID) programme, where I have been working with the team to build on ODI’s years of work helping to develop the capacity of researchers and organisations, in a variety of contexts, to have impact in the policy realm. Enrique and Martine’s evaluation findings from a recent communications project that RAPID and INASP worked on for IDRC last year identify some very interesting (though sadly not all new) issues that frequently surface when we do this type of work: contextual concerns (in a short space of time, can a consultant really get to the crux of the project without a strong working knowledge of the context?); support that comes at the end of a project, so that it feels ‘tagged on’ as an extra dimension rather than an integral one; and ensuring that the right people in a team, who can benefit most from the support, are involved in the first place.
One recommendation from Enrique and Martine that I don’t think we at ODI have seen before is assessing demand and talking directly to the grantees who need support before a contract is even signed, then deciding whether this capacity support should be provided and to whom. This is also related to another lesson from the report, on researcher incentives and pressures beyond communications and the fact that many do not believe it is their role to engage at all; that it is someone else’s job. Therefore, assessing the demand and finding the right people within the organisation to work with as early as possible is absolutely critical (and then re-evaluating this throughout the duration of the support, as circumstances alter). And if it looks as if the support is not going to have the necessary impact, consultants and think tanks should have the ability to just say no from the outset.
Yet, despite these and other well-established, clear and very sensible principles, there seem to be a few key confounding factors that often impede their implementation:
The first is funding: although there is a growing consensus on the importance of communicating research, funding for communication has undoubtedly suffered at the hands of the economic downturn and the growing ‘value for money’ agenda. It is not always seen as a major priority in the research cycle, and is often too closely, and even wrongly, associated with branding and marketing rather than policy influence. Moreover, even in the communication arena, donors often favour interventions that lead directly to visible outputs, like the workshop.
Secondly, as Enrique and Martine emphasise, there is often poor planning: donors and organisations realise quite late into a project and budget cycle that teams need extra support in this area, but with little time and little funding, a ‘quick’ workshop is often seen as an immediate ‘magic wand’. As a blog by my colleague Ajoy Datta highlights, workshops do give a good introduction to the topic and some initial support, but are unlikely to make a real impact once the participants have left the building.
I also think that there is still a misconception, at some levels, that researchers and teams shouldn’t be thinking about the communication of their work until later in the process, or indeed towards the end. However, whoever leads on communications needs to engage with stakeholders as early as possible to ensure that relationships are cemented and, ideally, that decision-makers have buy-in.
And finally, even if they could do all of the above, donors frequently do not have sufficiently flexible mechanisms and incentives to support a more appropriate response, as discussed in a recent ODI background note: Promoting evidence-based decision-making in development agencies.
So, faced with all this doom and gloom, what can be done? While workshops can still be useful, in RAPID we are now trying to incorporate them, where possible, as part of a wider and longer involvement in a project, ideally one where we are involved from the beginning. For example, we are currently working on a two-year project with the International Initiative for Impact Evaluation (3ie) on communications support to their grantees and knowledge management development (at an organisational level), and another three-year project on monitoring and evaluating grant policy influence. The latter is in consortium with three other regional organisations: CEPA (Sri Lanka), CIPPEC (Argentina) and CommsConsult (Zimbabwe). It is an exciting, though we recognise rare, opportunity to work at different organisational levels to do some thinking and to develop tools, research and capacity work in a ‘quality learning laboratory’. Support will be provided by locally based teams working in context, prioritising face-to-face engagement (which does include workshops!) but also using online engagement where necessary. All of this will hopefully help to ensure better impact, longevity and buy-in through stronger, more collaborative relationships between researchers and policy-makers and, from our side, better contextual knowledge.
And for other projects, where we are working with smaller organisations and donor budgets, we are trying to ensure that there is additional support around the workshops through mentoring, field trips and local partners, and we will certainly take on board the recommendations put forward by Enrique and Martine. Sharing evaluation findings in early discussions with donors can also make a big difference: an organisation I am working with decided to implement more face-to-face support because the donor had read and assimilated the recommendations from another project’s evaluation report.
Communications capacity development is a constant learning process, and there is no best-case, winning magic formula. Nor should there be, because good support is so dependent on the organisation, project, participants and context that simply ‘shoehorning’ in a ready-made approach or template is not going to work. This report contains some useful principles to guide new forms of support and to encourage donors, think tanks and consultants alike not to fall into the same traps of short-term support that frequently delivers only mediocre results. Above all, interventions are far more likely to become embedded in the life of a project (and hopefully beyond) if they are part of the project from the beginning and not left as an afterthought.
[Editor's note: Vanesa Weyrauch's response will come out on Wednesday but if you'd like to join the conversation with a post of your own, please send us an email or tweet. See the original post for inspiration: Developing research communication capacity: lessons from recent experiences]
(Please note that in this post I am referring to policy research initiatives or programmes: initiatives that have explicit policy influencing objectives.)
The RAPID Outcome Mapping Approach (ROMA) is a methodology that I helped develop while working for the RAPID Programme at the Overseas Development Institute. I think we made a mistake (well, more than one, but let me focus on this one today). RAPID has always been in high demand when it comes to helping policy research organisations and programmes to plan, monitor and evaluate policy research influencing strategies. It is (and I still am) called in to help after the overall policy research programme has been designed: the objectives (and logframes) have been decided and the contracts have been signed with the funder.
Our mistake was to pitch it this way. We accepted (or did not care to challenge) the idea that there were separate components: research, capacity building, ….., and policy influencing (which focused mainly on communications), and that it was the latter that ROMA could help with. We let the researchers deal with the research component and took it as a given. There is a reason for this. Historically, RAPID has been seen within ODI as non-research-based (even though its work is quite solidly grounded in a great deal of research), and so we chose to focus most of our attention away from discussions related to the planning of the research component. We assumed (and it could still work) that this was a safe way in. Unfortunately, researchers, under pressure from donors to focus more and more on communications, still protect the research component and shield it from approaches such as ROMA. It is my impression that they are willing to talk about policy influence, research uptake, communications, etc., as long as the research component is not affected.
But this is their mistake. ROMA is not that useful when it is brought in after the research component and the programme’s objectives have been decided. ROMA (and other similar approaches) is much more useful when it is used to plan the entire programme: including the research component of a policy research programme.
ROMA is about critical thinking. That is all it is. It can be used in any situation (big or small) and circumstance because it facilitates a process of reflection about our context, organisations, skills, objectives, partners, audiences, tactics, tools, how to use them, why, etc. It helps us to explain why we are doing what we do, and to check and re-check whether it is the right thing to do as more information becomes available. Users go through a narrative that helps them to identify and define objectives, think about the policy context (broad and narrow) that affects them, identify the main players in this context and those that the programme may want to target, determine more specific objectives for each, consider various ways of achieving them, and develop and choose the most appropriate approaches, tactics and tools.
Among these approaches, tactics and tools are the usual: media campaigns, training and education, digital communications, networking, …, and research. Yes, research. In a policy research programme, research (analysis, literature reviews, case studies, systematic reviews, impact evaluations, randomised control trials, clinical trials, etc.) is a component of the overall programme, just as communications, capacity building, networking, etc., are components of the programme too. Hence new research, like some of the activities of the other components, is not indispensable. It may very well be possible to affect policy by focusing on using existing research and simply promoting a public debate on a policy issue; or by improving the capacity of governments to make more informed decisions; or by creating formal links between policymakers and experts; etc.
Similarly, it may very well be that new research is absolutely necessary. In these cases, however, the research design cannot happen in isolation from policy influencing considerations. What kind of research is the most appropriate? ROMA can help decide what kind of research might be more relevant or useful to achieve the programme’s objectives. What questions should it answer? ROMA can help decide what questions need to be answered to develop the arguments that may influence the programme’s audiences. Should it be done collaboratively? ROMA can help decide. Who should we collaborate with? ROMA can help. What should be the outputs (products) of these research projects? ROMA can help. It can even help us decide who the researchers should be. I recall a case in which a minister told me that the government had no problem with the research methods and conclusions but could not really use findings from the researcher who had carried out the work. In another case, ROMA helped to avoid this situation. I say ROMA, but of course I mean ‘a planning methodology like ROMA’.
The problem is that all these questions are currently decided before any discussion about the context, audiences, policy objectives and the other components of the programme has taken place. Even the proposal writing process (and I have participated in many of these) is compartmentalised, often separating research from communications from capacity building from M&E. Each section tends to be drafted separately and then put together a few days before the deadline, the logframes are prepared at the last minute, and everything is then submitted to the donor. And all this is done before any real analysis of the policy context has been undertaken. I know this because whenever we come in to help with policy influencing, the first thing we do is ask about this; the answer is often the same: no. But by then it is too late.
Here is what I propose:
- Before developing a strategy, the donor or the organisations bidding for the policy research programme should carry out a ‘baseline’ study of the policy they intend to affect. This could be a political economy analysis of the policy process, or a study of the discourses that shape it. It should identify the various players involved, their interests, objectives, use of evidence (or not), networks, etc. AusAID has recently conducted a series of diagnostics of the knowledge sector in Indonesia that could serve as an example. The kind of studies that Emma Broadbent has carried out on policy debates is also relevant.
- This should help to clarify the policy objectives for the entire programme; they will be based on a realistic assessment of the context. Everyone these days seems to be talking about Theories of Change, but few base them on sound theories of how change actually happens.
- In turn, these should help to determine which players the programme proposes to focus its attention on, and how it could influence them or contribute towards changing their policy behaviours. Contribution here is the key word.
- This focus should also help to decide what may be the most appropriate approaches, tactics, and tools for the programme to employ. And this will include, possibly, a research component. Depending on the audiences and objectives this may be very theoretical, a bit more practical, quantitative, qualitative, participatory, etc.
- The research component, when designed at this stage and not before, will benefit from a baseline that explains what, how and why research is used; a clear audience among the key policy players; clear objectives; and a good sense of the other approaches (tactics and tools) that will be able to support and make use of the research. The research will inevitably be better linked to the whole programme, rather than being an isolated component developed before anyone bothered to think about the context.
- Once the strategy is developed, and only then, the right team can be assembled. Today, bids are put together after the programme ‘partners’ and staff have been identified. The right order, however, is to define the job first and then find the right organisations and people for it; it should not matter if they are in someone else’s team. Imagine hiring someone and only then checking to see what they could and could not do. That is exactly what happens now.
- Finally, and key to all of this, the programme strategy should accept that this process needs to be repeated over and over again. As the programme is implemented, new information will become available, new challenges will appear, new opportunities will unfold, etc. New approaches may then be more appropriate, new partners and staff may be needed, and old ones may have to be let go.
This is not an advertisement for ROMA. I do not really mind which planning approach is used. What I am arguing is that, for policy research initiatives, planning research and planning policy influence should not be separated.