Shebo Nalishebo, at ZIPAR in Zambia, talks about how a research project achieved policy influence and draws out some potential lessons for think tankers around the world.
In this post, Orazio Bellettini and Adriana Arellano, from Grupo FARO, outline the think tank's new approach to research and policy influence. This post has been partly inspired by Lawrence MacDonald's and Todd Moss' essay on CGD's approach to policy influence. What do you think? Do they have the right approach?
The marketplace of ideas is a powerful metaphor. It promotes the illusion that there is a market (with buyers, sellers, and intermediaries) and that what is being traded are ideas (which, in the case of think tanks, come from research). But is this so?
I’ve spent the last couple of weeks in the Balkans, particularly in Belgrade, working with a think tank there (more on this later), and a few new ideas have emerged from my conversations with the staff, its funders, comparators, policy audiences, and others in their policy community. One of these ideas is that think tanks compete (and collaborate) with each other (and with other organisations) on a number of fronts – and not just on ideas.
Think tanks can compete on ideas. Unfortunately, this is not often the case. Given think tank funding approaches, few think tanks in developing countries have the freedom to develop their own agendas; instead they are limited by what their funders want to know. Funders, in turn, allocate their funds to avoid duplication (this think tank to study public health, that one to study economic policy). This means that few policy issues are debated by think tanks promoting different ideas and solutions.
There are a number of new institutional funders out there (the Think Tank Initiative and the Think Tank Fund are two of them) but their support is never really sufficient to give the centres the upper hand in the development and pursuit of ideas. Their research is still conditioned by the myriad of contracts they sign with other funders, northern think tanks or NGOs, and consultancies.
A good indicator that a think tank works as a consultancy (studying only what its funders ask it to) is that it will usually demand more coordination between funders to avoid duplication. Research-driven think tanks (doing independent research) are usually happy with the idea of a public debate – for which duplication is necessary.
Think tanks also compete for funds. Donor policies that have reduced funding for research in Latin America, Asia, and the Balkans, for example, have led to more and more organisations (calling themselves think tanks) competing for access to fewer resources. This, combined with the donors’ approach to funding described above, leads to specialisation and a focus on very narrowly defined projects, which reduces the number of policy processes the think tanks can influence.
Think tanks also compete for people. In Sub-Saharan Africa, more than in any other place, I have been astonished by the salaries paid to some researchers: upwards of US$100,000 for mid-level (and not necessarily top-quality) researchers would not surprise me. These salaries are nominally higher than those paid in the United Kingdom and the United States, where qualified researchers are more readily available. In fact, think tanks in the UK are not expected to pay much: since they are seen by bright young graduates as stepping stones into politics, policymaking, and international organisations, salaries are relatively low.
A better comparison, however, is with think tanks in other developing countries. Southeast Asian, Latin American, and Balkan researchers’ salaries are significantly lower – at least based on what I have seen. In Argentina, US$20,000 will ‘buy’ you a top researcher. The pattern that emerges from this anecdotal evidence is quite logical: where there are more qualified researchers, salaries are lower than where the supply (stock and flow) of qualified researchers is low. To attract good researchers, many African think tanks need to pay salaries that compete with the private sector, international organisations, and the government.
In these circumstances, more funding for think tanks normally leads to a competition for the few ‘free’ researchers available or to poaching them from other sectors (often the public sector – which is in itself a problem).
Think tanks also compete for access to policy spaces or processes. Decisions, despite all the talk about complexity and the corresponding jargon, are not made in the cloud. Being present at a key meeting at the ministry of finance matters. Having breakfast with a minister or his or her advisors matters. But not everyone can get a seat at the table or share ideas over coffee. Access is restricted and conditioned.
Some spaces are public and therefore more think tanks can participate (e.g. open consultation processes), but others are limited by ideological or technocratic conditions. In many cases the think tanks perceived as most ‘academic’ will be given access to technocratic discussions – and will be trusted as impartial sources of advice, but no more. In others, ideologically identifiable centres will have an advantage and may even be given the lead in developing legislation and policies.
Often, access depends more on individuals within the think tanks than on the think tanks themselves –personal networks matter everywhere (developed or developing countries). And so the competition for people is crucial. Funding also provides access: a donor usually has the power to open doors for their grantees –figuratively and literally. So competition for the right funding is also important when it comes to access.
In addition to all of this, think tanks do not just compete with each other. More often than not they compete, depending on their business models, with NGOs, academic research departments, and consultancies (both in their countries and abroad). For example, ODI in the UK competes with think tanks in Latin America, Africa, and Asia because it bids for research projects that could very well be carried out by local research organisations. But ODI has access to DFID (and other northern donors) that local think tanks cannot match; it can hire researchers with more ease than local think tanks; and it can build up its ‘organisational competence’ by referring to projects carried out by programmes and researchers across the organisation, regardless of whether the proposed team has the experience or not. And since donors like to work across countries and regions, a single local think tank in Sri Lanka or Ecuador or Rwanda will find it difficult to compete.
NGOs have also begun to develop their research capacity (or at least the perception that they have one) and use this to demand access to more technocratic spaces – often reserved for academics and think tanks. Consultancies, too, taking advantage of their higher capacity to win large programmes, are developing ‘think tank’-style initiatives that directly compete with think tanks – often in their own policy communities. Take, for example, the Climate and Development Knowledge Network (CDKN), which is slowly starting to publish and disseminate policy research outputs at the national level branded as CDKN products. What chance does a small sustainable development (or related) think tank stand against the combined might of PwC, ODI, and others?
Finally, there is competition on the label: many organisations that would never before have called themselves think tanks are beginning to do so in response to the increasing interest of donors in this particular type of organisation. Campaigning NGOs and networks are talking about setting up ‘research units’, and service delivery NGOs and consultancies are claiming to be think tanks because they are learning and sharing their lessons with others, to mention two cases.
One effect of this competition is that the concept of think tanks gets muddled up, and this can lead to a loss of credibility for those who do deserve the label. Another effect is that civil society (and society as a whole) may lose other types of organisations that are valuable in their own right and whose existence and strength in fact support think tanks.
- Avoid simple metaphors like ‘marketplace of ideas’.
- Funders need to pay more attention to how their funding strategies affect think tanks and their communities (including all these other actors) at the global, regional, and national levels. Demanding that more money is spent ‘in country’ is not responsible policymaking; neither is channelling funds via large northern think tanks or corporations.
- Funders need to fund the development of new generations of potential researchers by investing (and leveraging public and private funds) in universities. Workshops are not enough to learn how to be a good researcher -this has to be learned from early on.
- Funders should also attempt to leverage domestic funds to reduce dependence on foreign funds but also make research funding more local -and avoid, in that way, overfunding and dependence on northern centres, NGOs, and consultancies.
- Think tank funders have to publish their own definitions of think tanks. These definitions have to be flexible (assume that they may change over time) and local (a think tank in Peru does not have to be the same as a think tank in Sri Lanka). Other types of civil society organisations should also be supported and not encouraged (even if unintentionally) to ‘become’ think tanks.
(Please note that in this post I am referring to policy research initiatives or programmes: initiatives that have explicit policy influencing objectives.)
The RAPID Outcome Mapping Approach is a methodology that I helped develop while working for the RAPID Programme at the Overseas Development Institute. I think we made a mistake (well, more than one, but let me focus on this one today). RAPID has always been in high demand when it comes to helping policy research organisations and programmes plan, monitor and evaluate policy research influencing strategies. But RAPID is (and I still am) called in to help only after the overall policy research programme has been designed: the objectives (and logframes) have been decided and the contracts have been signed with the funder.
Our mistake was to pitch it this way. We accepted (or did not care to challenge) the idea that there were separate components: research, capacity building, ….., and policy influencing (which focused mainly on communications), and that it was the latter that ROMA could help with. We let the researchers deal with the research component and took it as a given. There is a reason for this. Historically, RAPID has been seen within ODI as non-research-based (even though its work is quite solidly based on a great deal of research) and so we chose to focus most of our attention away from discussions related to the planning of the research component. We assumed (and it could still work) that this was a safe way in. Unfortunately, researchers, under pressure from donors to focus more and more on communications, still protect the research component and shield it from approaches such as ROMA. It is my impression that they are willing to talk about policy influence, research uptake, communications, etc. as long as the research component is not affected.
But this is their mistake. ROMA is not that useful when it is brought in after the research component and the programme’s objectives have been decided. ROMA (and other similar approaches) is much more useful when it is used to plan the entire programme: including the research component of a policy research programme.
ROMA is about critical thinking. That is all it is. It can be used in any situation (big or small) and circumstance because it facilitates a process of reflection about our context, organisations, skills, objectives, partners, audiences, tactics, tools, how to use them, why, etc. It helps us to explain why we are doing what we do – and check and re-check if it is the right thing to do as more information becomes available. Users go through a narrative that helps them to identify and define objectives, think about the policy (broad and narrow) context that affects them, identify the main players in this context and those that the programme may want to target, determine more specific objectives for each, consider various ways of achieving them, and develop and choose the most appropriate approaches, tactics and tools, etc.
Among these approaches, tactics and tools are the usual: media campaigns, training and education, digital communications, networking, …, and research. Yes, research. In a policy research programme, research (analysis, literature reviews, case studies, systematic reviews, impact evaluations, randomised control trials, clinical trials, etc.) is a component of the overall programme; just like communications, capacity building, networking, etc., are components of the programme, too. Hence new research, like some of the activities of the other components, is not indispensable. It may very well be possible to affect policy by focusing on using existing research and simply promoting a public debate on a policy issue; or by improving the capacity of governments to make more informed decisions; or by creating formal links between policymakers and experts; etc.
Similarly, it may very well be that new research is absolutely necessary. In these cases, however, the research design cannot happen in isolation from policy influencing considerations. What kind of research is the most appropriate? ROMA can help decide what kind of research might be more relevant or useful to achieve the programme’s objectives. What questions should it answer? ROMA can help decide what questions need to be answered to develop the arguments that may influence the programme’s audiences. Should it be done collaboratively? ROMA can help decide. Who should we collaborate with? ROMA can help. What should be the outputs (products) of these research projects? ROMA can help. It can even help us decide who the researchers should be. I recall a case when a minister told me that the government had no problem with the research methods and conclusions but could not really use findings from the researcher who had carried out the work. In another case, ROMA helped to avoid this situation. I say ROMA but of course I mean ‘a planning methodology like ROMA.’
The problem is that all these questions are currently decided before any discussion of the context, audiences, policy objectives, and the other components of the programme has taken place. Even the proposal writing process (and I have participated in many of these) is compartmentalised and often separates research from communications from capacity building from M&E. Each section tends to be drafted separately and then put together a few days before the deadline, the logframes are prepared at the last minute, and all is then submitted to the donor. And all this is done before any real analysis of the policy context has been undertaken. I know this because whenever we come in to help with policy influencing the first thing we do is ask whether such an analysis has been done; and the answer is often the same: no. But by then it is too late.
Here is what I propose:
- Before developing a strategy, the donor or the organisations bidding for the policy research programme should carry out a ‘baseline’ study of the policy they intend to affect. This could be a political economy analysis of the policy process, or a study of the discourses that shape it. It should identify the various players involved, their interests, objectives, their use of evidence (or not), networks, etc. AusAID has recently conducted a series of diagnostics of the knowledge sector in Indonesia that could serve as an example. The kind of studies that Emma Broadbent has carried out on policy debates is also relevant.
- This should help to clarify the policy objectives for the entire programme; they will be based on a realistic assessment of the context. Everyone these days seems to be talking about Theories of Change – but few base them on sound theories of how change actually happens.
- In turn, these should help to consider which players the programme is proposing to focus its attention on and how it could influence them or contribute towards changing their policy behaviours. Contribution here is the key word.
- This focus should also help to decide what may be the most appropriate approaches, tactics, and tools for the programme to employ. And this will include, possibly, a research component. Depending on the audiences and objectives this may be very theoretical, a bit more practical, quantitative, qualitative, participatory, etc.
- The research component, when designed at this stage and not before, will benefit from having a baseline that explains what, how, and why research is used; a clear audience among the key policy players; clear objectives; and a good sense of the other approaches (tactics and tools) that will be able to support and use the research. This research will inevitably be better linked to the whole programme, rather than being an isolated component developed before anyone bothered to think about the context.
- Once the strategy is developed, and only then, the right team can be assembled. Today, bids are put together after the programme ‘partners’ and staff have been identified. The right order, however, is to develop the strategy first and then find the right organisations and people for the job. It should not matter if they are in someone else’s team. Imagine hiring someone and only then checking to see what they could and could not do. That is effectively what happens now.
- Finally, and key to all of this, the programme strategy should accept that this process needs to be repeated over and over again. As the programme is implemented new information will become available, new challenges will appear, new opportunities will unravel, etc. Therefore, new approaches may be more appropriate, new partners and staff may be needed, and old ones may have to be let go.
This is not advertising for ROMA. I do not really mind what planning approach is used. What I am arguing is that, for policy research initiatives, planning research and planning policy influence should not be separated.
M&E of research communications isn’t easy. Given the complexity of policy cycles, examples of one particular action making a difference are often disappointingly rare, and it is even harder to attribute each to the quality of the research, the management of it, or the delivery of communications around it. This blog outlines some of the lessons I’ve learnt in the process of creating the dashboard and investigating the data, a framework I’ve developed for assessing success, and lists some of the key digital tools I’ve encountered that are useful for M&E of research communications.
In the world of evidence-based policy-making, we often struggle with its evil twin: policy-based evidence-making.
Proponents of the former dream of a world in which a problem is identified, those in charge commission rigorous research, several options are presented, the pros and cons of each are weighed and the best choice is enshrined into policy (with the assumption that said policy is then implemented and there is further monitoring and evaluation of its implementation to make tweaks and improve upon the policy).
Anyone familiar with policy making – whether within governments, companies, organisations or social institutions – will know that such an ideal policy cycle is far from reality. More often we see someone with a value or interest in a particular topic looking to build a case for a particular solution, which may become policy if the case is compelling enough. These issue ‘champions’ may be nefarious schemers looking after their own best interest, but case building is equally prevalent among advocacy organisations and politicians working for the greater good.
Case building is a conscious, active, purposive and biased process. Researchers like to pretend, of course, that bias is beneath them and that the scientific method and peer review guard against this. But research into human psychology would indicate otherwise. Even when trying our best to be analytically objective, the human subconscious falls prey to something known as ‘confirmation bias’.
Confirmation bias expresses itself in different ways among humans, but effectively it can be boiled down to a subconscious predisposition to find or give more weight to evidence/data/information that supports a pre-existing belief or pre-existing knowledge. In research this may mean seeking data that supports a given hypothesis (or conversely, ignoring evidence that doesn’t support the hypothesis). Don’t believe me? Try this little activity to see confirmation bias in action.
In the real world, this means that we often try to support our beliefs by going to information sources that already support our view. This is what makes political punditry of the likes of Fox News (on the right) and MSNBC (on the left) in the U.S. so compelling. They do the heavy lifting of interpreting reality to fit in with pre-conceived worldviews so viewers don’t have to.
Indeed it turns out that not only do we tend to seek information that already agrees with us, but also information sources (or knowledge brokers) that we can trust to do the same. And as Eli Pariser points out, the increasing use of advanced web technologies that use algorithms to filter out ‘irrelevant’ information means that people don’t even have to make conscious decisions to seek out trusted information – and might make it even more difficult for people actively trying to challenge their existing beliefs.
This poses a serious problem for those of us seeking to influence others to change attitudes and beliefs: how do we get inside a target stakeholder’s filter bubble? If it was difficult to do in real life, it’s that much more so when we’re trying to game an algorithm.
I suggest several strategies:
- Work through existing/trusted channels: I wrote previously on this blog about how new government regulations on communications (or policy influence and research uptake) emphasise using existing channels to communicate research – this post on confirmation bias should further support that view. If one suspects that target stakeholders are already going to certain news outlets, websites or information sources (online or off), use them – even if that means working with the ‘enemy’.
- Go with the grain: Given what we know about confirmation bias, it is probably unreasonable to expect that confronting people with facts and figures is going to make much difference (indeed it might make them more obstinate). To change opinions, we must work with, and not against, pre-held beliefs and those trusted opinion shapers to subtly shift the understanding of an issue. Instead of creating a fully formed argument to change opinion, try for messaging and channels that first lay the groundwork for openness to a message. Indeed it may even encourage information-seeking behaviour – a related effect is the ‘Baader-Meinhof phenomenon’ (or frequency illusion), where the mind focuses in on a particular piece of information and then begins to find it everywhere.
- Sow wild oats: Getting past filter bubbles will likely mean trying a few different strategies. Don’t just put a study onto a project website and call it good. Try to plant information in a number of different guises (i.e. on different sides of the political spectrum) and formats (i.e. not just in blogs or on a website, but also in the news, and even offline).
- S-E-OH?: The first rule of the web today is to make sure content is search engine optimised (SEO) so that the Googles and Bings of the world can find what you have to say. This is done through a bit of magic usually involving page titles, page headings, and what links back to your site. When publishing your content online, make sure that the first two items play well in more than one bubble. Or, if you’re feeling particularly clever, why not start two different blogs with similar content but framed for different audiences?