
Posts tagged ‘M&E’

Altmetrics: the pros and the cons

Altmetrics are a new way of measuring research impact that adds a wider set of metrics to traditional bibliometric rankings based on academic journal citation analysis. While they are a very useful initiative for measuring impact beyond traditional scholarly output, they still have to show progress in certain areas.

Read more

Monitoring and Evaluation guide for health information

USAID published a Monitoring and Evaluation guide for health information products and services in 2007. It sets out to describe what and how to evaluate health information products and services, through a framework that clearly separates inputs, processes, outputs and outcomes. It seeks to measure their reach, usefulness, use, and collaboration and capacity-building attributes, using quantitative as well as qualitative methods.

This guide focuses on assessing how effectively information reaches its audience, through the widest possible range of channels, and how well it satisfies its users. The guidelines present a range of indicators and offer advice on how each should be measured, clearly outlining its definition, data requirements, data sources, purposes and issues, with examples given at the end of each indicator.
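As an illustration only, the guide's indicator template (definition, data requirements, data sources, purposes and issues) could be sketched as a simple record; the field names and the example below are hypothetical, not the guide's own schema:

```python
from dataclasses import dataclass, field

# A hypothetical sketch of one indicator record; the fields mirror the
# attributes the guide outlines but are NOT the guide's own schema.
@dataclass
class Indicator:
    name: str
    stage: str            # "input", "process", "output" or "outcome"
    definition: str
    data_requirements: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)
    purposes_and_issues: str = ""

# Illustrative example of a "reach"-type output indicator.
reach = Indicator(
    name="Number of copies of a product distributed",
    stage="output",
    definition="Count of print and electronic copies delivered to intended users",
    data_requirements=["distribution logs"],
    data_sources=["mailing lists", "download statistics"],
)
print(reach.stage)  # output
```

Structuring indicators this way makes the guide's separation of inputs, processes, outputs and outcomes explicit, and keeps each indicator's data sources attached to it.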

According to Lisa Gosling, the objectives of monitoring and evaluation are ensuring quality, accountability and learning for the benefit of good management. The guide has lists of “dos and don’ts” which encourage the user to be simple, clear and effective, while avoiding the confusion which often accompanies complex reporting.

M&E is seen as an approach to developing a working routine that can be easily traced and managed, thus allowing internal users (collaborators, directors and members of the organisation), as well as external ones (partners, funders or supervising entities), to remain informed about relevant processes and activities.

Significantly, as in the case of the research-based evidence advocacy guide by Young and Quinn, this one also outlines theories of behaviour change and communication. Behaviour change, according to Fishbein et al., involves eight conditions, seen through variables observable in the person who displays a particular behaviour:

  • Intention,
  • Skills,
  • Environmental constraints (direct causes),
  • Attitude,
  • Norms,
  • Self-image,
  • Emotion, and
  • Self-efficacy.
The latter two are expected to influence behavioural intentions.
More manuals and guides can be found here.

A pragmatic guide to monitoring and evaluating research communications using digital tools

M&E of research communications isn’t easy. Given the complexity of policy cycles, examples of one particular action making a difference are often disappointingly rare, and it is even harder to attribute any success to the quality of the research, its management, or the delivery of communications around it. This blog outlines some of the lessons I’ve learnt in the process of creating the dashboard and investigating the data, a framework I’ve developed for assessing success, and lists some of the key digital tools I’ve encountered that are useful for M&E of research communications.

Read more

Programming for complexity: how to get past ‘horses for courses’

Harry Jones, from the RAPID Programme, summarises and comments on a series of discussions that his work on complexity is generating. He tackles the ‘horses for courses’ argument, addressing two important questions. Firstly, and most obviously: how do you choose the right horse for your course? Whether we’re talking about policy instruments, evaluation methods, or gambling on horses, this will never be an easy question. Secondly: what are we arguing against? Implicitly, ‘horses for courses’ is cast against a ‘blueprint approach’, where a few standardised solutions (whether tools, methods, or, more generally, types of programmes) are rolled out to be implemented in diverse contexts, irrespective of those contexts.

Read more

The taxi driver test: a new way of testing the relevance, usefulness, and stickiness of your policy recommendations

Coming up with the right policy recommendations is not easy. Most researchers (and, for that matter, most think tanks) struggle with them. Often they are quite irrelevant to the mood or the current agenda; they are too vague or too obvious and present no clear way forward; or they are just impossible to understand.

I have been in Lusaka for a bit over a week, following up on research on think tanks that I started earlier in the year and looking into how to help strengthen economic policy debate. I have been taking taxis around town and talking to the drivers about the upcoming elections (last time I was here, the date for the elections had not yet been set). I had been trying to ask them about the policy proposals of the various candidates, but with little luck. So last Wednesday I tried a different approach. On my way to and from interviews here in Lusaka I decided to ask taxi drivers to tell me what they would want from their preferred candidates: If you had Sata (most will vote PF) in your taxi, what would you ask him for? What three things do you want? (Then I extended it to 5.)

It took a while but soon I got responses –and I was rather surprised by them. These were not thoughtless demands but rather well-considered policy proposals or recommendations (often with a bit of background analysis for my benefit). Among the top policy recommendations I heard last week were (these are more or less in order):

  • More and better formal employment –particularly for the youth
  • Improve the quality of existing housing complexes and build more housing units to bring down rent costs
  • Lower fuel taxes to bring down fuel costs –and hence transportation costs and food prices
  • Improve the roads in Lusaka and in the countryside
  • Improve the quality of health in the country -hospitals need more medicines and to be better staffed
  • Develop a youth policy to give young Zambians a better education (address the barriers to access –cost, distance and lack of family income) and job prospects
  • Control the rise of food prices
  • Tackle corruption

Here is what I suggest think tanks do with this. It might sound a bit NGO-ish, but based on the conversations I had last week I think you’ll be pleasantly surprised by the quality of the discussion that could take place.

First, find a few taxi drivers around the political centre of your country –if you can do it more widely around your city then that would be good too (ideally, you want some well-informed taxi drivers). Bring them over to your offices for a lunch meeting, or maybe just do this as you travel across town. Ask each one of them to think of the top 5 policy recommendations they would have for the government (or what they would ask the president for if he/she rode in their taxi). What should the government do? Then aggregate their recommendations to try to get a top-10 list (maybe counting their frequency, or having them debate until they reach a consensus).
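The aggregation step (counting frequency across drivers) is just a tally. A minimal sketch, with made-up responses and assuming each driver's answers have already been coded into a shared wording:

```python
from collections import Counter

# Hypothetical responses: each inner list is one driver's top-5 asks,
# already normalised to a common phrasing (in practice you would have
# to code free-text answers into shared categories first).
drivers = [
    ["jobs for the youth", "lower fuel taxes", "better roads",
     "more housing", "tackle corruption"],
    ["better roads", "jobs for the youth", "control food prices",
     "tackle corruption", "staffed hospitals"],
    ["lower fuel taxes", "jobs for the youth", "better roads",
     "more housing", "control food prices"],
]

# Count how often each recommendation appears and keep the most common.
tally = Counter(rec for asks in drivers for rec in asks)
top_10 = tally.most_common(10)
print(top_10[0])  # ('jobs for the youth', 3)
```

Frequency counting is the quick version; the consensus-by-debate alternative mentioned above would replace the tally with a facilitated discussion.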

It would also be a good idea to print them out on a single sheet of paper and give each driver a copy. They can then take them to their ‘landings’ or taxi ranks and share them with their colleagues and passengers. Maybe the passengers can sign them if they agree with the ‘taxi manifesto’ and slowly turn this exercise into a petition. I suggested this to the taxi drivers here in Lusaka and they seem to have liked the idea. Is this the beginning of a taxi union?

Anyway, as a think tank you could use this list and your taxi drivers in the following ways:

  • The relevance test: Take a look at your research and analysis agenda and see if it addresses any of these issues. If it does, then you are likely to be relevant. Taxi drivers have a particular knack for keeping up with current affairs and they are, everywhere in the world, great indicators of which opinions and views are in and which are out. You should repeat the exercise every once in a while to make sure that you are not too far from public opinion. Of course, I do not suggest that all your work should focus on these issues –some has to be below the radar, and maybe even too farsighted for taxi drivers to know about. (Remember that demand-driven think tanks are really just research consultancies.)
  • The usefulness test (this was originally recommended by Emma Broadbent): Ask yourself if the recommendations of your think tank’s research and analysis go beyond what taxi drivers have recommended already. If your researchers say that the government should ‘Improve the quality of existing housing complexes and build more housing units to bring down rent costs’ then you could tell them to get another job and replace them with the taxi driver who came up with the same recommendation. Your research should lead to recommendations that go beyond those made by the taxi drivers: How can the government improve the quality and quantity of housing? After all, how useful can you be if you cannot answer this?
  • The stickiness test (which includes a series of retweet questions): Once you have your recommendations, hail a taxi (or get the researcher to do it if you did not do the work) and ask the driver to take you to the most relevant ministry or policymaking body. Here is the test: Can you explain your recommendations to the taxi driver in the time that it takes to get there? If you can’t, then you may need to go back to the drawing board. (To make sure that think tanks based in a different city, or in cities with lots of traffic, do not get an unfair advantage, let’s use Lusaka as the standard: about 10–15 minutes.) The retweet test is this: After you have explained them, 1) ask your taxi driver if he would tell his peers about it; 2) get your taxi driver to take you to his landing or to a taxi rank and see if he can explain it to other drivers; 3) if they understand them, see if one of them can repeat it back to you. (Don’t worry if they do not use the exact words you used. In fact, you don’t want them to. You want them to incorporate your ideas into their own arguments and ways of speaking.) If the answer is yes in all cases, then you’ve done well. If the answer to any of these is no, then the chances that your message will get anywhere beyond your usual circle of friends and colleagues are slim.

‘Shares’ as an indicator of influence

If I read a good paper or listen to a good argument and take it away with me, I could say that I’ve been influenced. But how does the author of the paper or source of the argument know? Now, if I take their paper and share it with someone else, or if I pass on their arguments to my peers, that may be seen as a clear indication of influence. I would not share something I think is poorly articulated –or simply plain wrong.

While it might be hard to find out if everyone who reads this blog is influenced by it, I think I can safely say that most of those who chose to share its content with others were. At least they thought that the post or blog was worth passing along. They were willing to put their name to it.

When designing websites, think tanks should make sure that they can trace shares of their studies and outputs. Platforms like WordPress make it easy to add ‘sharing’ buttons. Twitter has a function to see if your tweets have been retweeted.

None of these (and I am sure there are others) cost anything, and they can be very useful tools.

This blog has 220 subscribers and 305 Twitter followers. I usually forward each post to a few online communities. About 150 people per day visit the site (sometimes more, sometimes less). But the following posts have been ‘shared’ from the site. Next time I’ll have a look at the top shares on Twitter and Facebook.

Top Posts & Pages

These are the posts on this site that got the most shares:

Title – Shares
on success from TED by Alain de Botton 8
A new think tank model: a focus on productive sectors 7
Evaluation reading list, contacts and resources 6
The onthinktanks interview: Simon Maxwell 5
Independence, dependency, autonomy… is it all about the money? 4
Think tank directories and lists 4
The Standard: Africa home to only 2.3 per cent world’s researchers 3
Impact of Social Sciences: Maximizing the impact of academic research 3
Speed Dating for think tanks: how to meet your future partner? 3
Information, confirmation, and influencing advice 3
Got resources? Think tank them 2
Information Dissemination: Think Tanks, the Media, and the Future of Ideas Distribution 2
Getting Better at Strategic Communication advice from RAND 2
What is the role of the intelligence services? And think tanks? 2
Ideology trumps facts -but facts still matter 2
Policy analysis and influence: researchers or communicators? 2
Manuals 2
Understanding and supporting networks: learning from theory and practice -May 5 2
Different ways to define and describe think tanks 2
‘Think tanks are becoming bland’ from The Guardian’s Comment is Free 1
‘I predict a riot’ -and then explain it 1
Think Tanks and politics/ Think tanks y la política 1
Working Papers are NOT Working 1
An underappreciated benefit of experiments: convincing politicians when their pet projects don’t work | News, views, methods, and insights from the world of impact evaluation 1
Impact evaluations, research, analysis… what is the difference? 1
Think tanks: research findings and some common challenges 1
An unlikely path to aid: Paying to set up think tanks – Doug Saunders 1
What makes a successful policy research organisation in a developing country? Review of Ray Struyk’s latest paper 1
For the 21st Century think tank: mobile data collection and research tools 1
Theories of change: an annotated review of documents and views 1
Corruption free think tanks 1
Contributors 1
When evidence will not make a difference: motivated reasoning 1
Online Course: How to build a policy influence plan 1
more on how to present research 1
on how to organise and present a think tank’s research 1
“Sea Turtles” or “returnees” behind China’s think tank growth 1
Evo, think tanks and policy in Bolivia 1
Think tanks and policy makers in Argentina 1
Why Think Tanks are More Effective than Anyone Else in Changing Policy 1
The rise of conservative think tanks in the U.S. marketplace of ideas 1
Lists and manuals 1
How think tanks change public policy – the Overton Window of Political Possibility 1
Call for proposals for experiments in using evidence for policy influence in South Asia 1
Handbook on monitoring, evaluating and managing knowledge for policy influence 1
After the uprising Egypt will need solutions: bring in the think tanks 1
Another year, another ranking of think tanks (and surprise surprise, Brookings is still the best) 1
on some of Goran’s musings 1
Conformity and groupthink: a tool for think tanks or a danger? 1
Whose money is it anyway? think tanks and the public: an Indian debate 1
A quick poll on the perception of think tanks 1
Ezra Klein – Giving is personal. Make it political. 1
on the definition of think tanks: Towards a more useful discussion 1
Right Thinking, Big Grants, and Long-term Strategy 1
About 1
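A ranking like the one above is just a sorted tally. A minimal sketch using a few of the rows (counts from the list; any tracking hook that would feed such a tally is beyond this example):

```python
# Share counts for a handful of the posts listed above.
shares = {
    "on success from TED by Alain de Botton": 8,
    "A new think tank model: a focus on productive sectors": 7,
    "Evaluation reading list, contacts and resources": 6,
    "The onthinktanks interview: Simon Maxwell": 5,
}

# Sort by share count, highest first, and print the top entries.
top = sorted(shares.items(), key=lambda kv: kv[1], reverse=True)
for title, n in top[:3]:
    print(f"{n:>2}  {title}")
```

The same sorted-tally idea underlies the stats widgets that platforms like WordPress provide out of the box.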

Impact evaluations, research, analysis… what is the difference?

When it comes to policy influence, what is unique about impact evaluations in relation to other types of research? Let me explain why I am asking this question. When I was at RAPID (and still now) I was asked to help organisations develop policy influencing strategies. Sometimes this help came in the form of a workshop, but other times it was provided over a longer period through mentoring and support. Almost every time, the clients would ask for lessons tailored to their own contexts –which ranged from the politics of international donors to local NGOs, working globally, regionally or nationally, etc. This often meant that they wanted case studies from their region (e.g. Africa) or the sector they were working in (e.g. health).

Now, RAPID does not tend to advise on HOW to influence but on HOW TO DECIDE how to influence –there is a difference (although the communications team does help with some of the more practical aspects of this). So we have always expected that the context will be provided by the organisation we are working with, and that decisions about which specific influencing approaches to follow will also be theirs. This might sound like a cop-out but in fact it is an honest approach: we cannot possibly claim to be experts on every context and sector (we work with organisations all over the world that in turn work in a range of sectors). And in any case, we had to assume that those we worked with knew their context well enough –this, we found out, was a fairly naive assumption in some cases.

So to deal with this demand we tried to provide support in a way that would allow the client to present, up front, as much contextual and content knowledge as possible. And to do this, we provided some tools (but this is another matter).

Although the planning process proposed is applicable to all sectors and contexts (except that it may not be possible or necessary to follow all steps, or to be as detailed, in all situations), I accept that influencing in Africa (and in each country) is different than in Latin America –and in health policy it is likely to be different than in education policy, and so on. But it is also different to influence as a research centre than as an NGO, and so on. So focusing on context and content issues may in fact be misleading.

Recently, however, we have been asked to tailor-make our planning approach (the RAPID Outcome Mapping Approach) and recommendations on HOW to influence to impact evaluations. Behind this demand is the assumption that policy influencing based on the findings of impact evaluations is different from policy influencing based on the findings of other types of research.

So how different is it to influence on the basis of one type of research rather than another?

My view is that this question is not relevant -certainly not useful. I will provide my reasons below but let me also ask for your input. If you can demonstrate (or argue, because I am not demonstrating anything) the opposite, please do so; this is an open debate.

To start the debate, let me provide four reasons for my view:

  • Argument not evidence: I have already used this before in this blog, but I think it is still a relatively new idea in the research-policy linkages community. Policy (or programme or project –or, more broadly, behaviour) does not change because of a single piece of evidence. Change happens because new (or improved) arguments are convincing enough to affect someone’s beliefs, assumptions, premises and actions. These arguments are made up of a number of elements, for instance: evidence (from different sources), appeals (to ideology, values, rights, laws, interests, etc.) and imagery (metaphors, stories, etc.). These elements are put together into an argument. And so, even if the findings of impact evaluations are used, they are unlikely to be the only type of evidence, and it is not possible to separate them from the argument as a whole.
  • Credibility is in the eye of the beholder (or ‘any evidence is just evidence’): There is a view that impact evaluations are different from other types of research –that they are the gold standard of evidence. The scientific rigour involved in an impact evaluation, its proponents argue, sets it apart from all other methods. This may be true. Impact evaluations may be more reliable than other methods, but when it comes to influencing this only matters if (and only if) the person or people being influenced agree. And if they do, then, if anything, influencing will be easier and therefore there is even less need to focus on differences or come up with lots more specific examples.
  • There are few full-time impact evaluators –and impact evaluation centres: While some people and organisations may be specialising in impact evaluations, most researchers do a bit of everything. Impact evaluations are just one more type of research they have to carry out in a normal year. And the same is true for the organisations they work for. As a consequence, they do not communicate impact evaluation findings alone. Therefore, the idea that they would, or would be able to, specialise in one particular type of influencing (based on the source of the evidence) does not seem to make much sense.

So, not only are impact evaluation findings tangled up with the findings of other types of evidence and other non-evidence components of a good argument, they are also, whatever their scientific rigour, not necessarily seen as any different from (or better than) other types of evidence by those being influenced (although some do see them that way). And to top it off, those attempting to influence are not necessarily impact evaluation specialists and therefore cannot possibly develop separate strategies based only on impact evaluations and strategies based on ‘other sources of evidence’.

The fourth reason is more fundamental:

  • The hundreds if not thousands of cases gathered by the literature have given us a great deal of lessons (common sense, really) that are relevant to all cases. A lesson does not imply that one should necessarily behave in a particular way, though. For instance, a lesson may be that working with the media can help to open up the debate –but in many cases opening up the debate may not be desirable. This does not negate the lesson; in this particular case it is just not applicable. The usefulness of impact evaluation specific lessons may lie not in the actions that they suggest but in helping to communicate with impact evaluators and to convince them of the importance of planning for influence. In other words, the lessons from impact evaluation cases may be used as part of an argument employed on the researchers themselves. But whether they will be more useful than lessons from non-impact evaluation based cases is not relevant.
What do you think?
  • Is there anything about impact evaluation findings that make influencing strategies (and actions) different from those where impact evaluations have not been used?
  • Is it useful to talk about impact evaluation based influence and non-impact evaluation based influence?
  • Is it worth the effort? Can we not learn from any case?

Think tanks: research findings and some common challenges

These are my notes from a presentation for a SMERU Seminar that I gave in Jakarta on 31 May 2011.

(It includes some of the additional information provided by participants during the event; and I’ll add a video and audio as soon as I figure out how to do it.)

Evidence versus Argument

What faux engagement initiatives lack is any content to inspire and engage the public’s minds and passions. Historically, what has moved millions to act upon the world and change things for the better has been big ideas, such as freedom, progress, civilisation and democracy. Today we are offered the thin gruel of ‘evidence-based policy’. When we are told that scientific research demands particular courses of action, ever increasing areas of politics are ruled out-of-bounds for democratic debate; ideas and morality are sidelined by facts and statistics. In contrast, the Battle of Ideas is a public square within which we can explore the crisis of values, and start to give human meaning to trends too often presented fatalistically and technically. –Claire Fox, director, Institute of Ideas, on behalf of the Battle of Ideas Committee 2010

This presentation aimed first to outline some findings emerging from my research on think tanks in developing countries, and then to pose some questions, common to many of them, as a way of encouraging a discussion.

The first obvious question most people have is: what is a think tank? The literature is not short of options: think tank definitions can be divided into broad and narrow ones:

  • The broad definition: any organisation that produces or uses research (broadly defined as well) to inspire, inform or influence policy. (To use this definition you will have to make some decisions over whether an organisation can in fact be labelled a think tank or not. However, it is possible for there to be think tanks in universities, the government and the private sector, and for other types of non-governmental organisations to fulfil those roles.)
  • The narrower definition: an organisation not governed by the rules of academia, policy, the media or the private sector, and that seeks policy influence through research-informed (also broadly defined) arguments.

In both cases the organisation may or may not have an identifiable ideological affiliation (which is a contribution from Braml 2004). Think tanks then may or may not be entirely separate (and autonomous) from the State, the private sector, political parties, professional/business associations, universities or other types of civil society organisations, etc.

The notion that a think tank requires independence from the state (or corporations) in order to be ‘free-thinking’ is an Anglo-American norm that does not translate well into other political cultures. Increasingly, therefore, ‘think tank’ is conceived in terms of a policy research function and a set of analytic or policy advisory practices, rather than a specific legal organizational structure as a non-governmental, non-partisan or independent civil society entity. –Diane Stone (2005)

This, however, does not mean that definitions or descriptions of think tanks in the Anglo-American tradition are not useful.

A possible characterisation based on type of organisation (from various authors), which may address initial questions of whether an NGO is a think tank or if a consultancy is a think tank, is the following:

  • Independent civil society think tanks established as non-profit organisations (Stone 2005) –ideologically identifiable or not (Braml 2004)
  • Policy research institutes located in or affiliated with a university (Stone 2005)
  • Governmentally created or state sponsored think tank (Stone 2005)
  • Corporate created or business affiliated think tank (Stone 2005)
  • Political party think tanks (Stone, Braml, and others) and legacy or personal think tanks
  • Global (or regional) think tanks (combining some of the above)

Other ways to classify them include categories or types of think tanks, described by:

  • Size and focus: e.g. large and diversified, large and specialised, small and specialised (Weidenbaum 2009)
  • Evolution or stage of development: e.g. first (small), second (small to large but more complex projects), and third (larger, with policy influence) stages (Struyk R. J. 2006)
  • Strategy, including:
    • Funding sources (individuals, corporations, foundations, donors/governments, endowments, sales/events) (Weidenbaum 2009) and business model (independent research, contract work, advocacy) (Abelson D. E., 2006 Abelson D. E. 2009, Belletini 2007, Ricci 1993, Rich 2006, Reinicke 1996, Smith 1991, Weaver 1989, Braml 2004)
    • The balance between research, consultancy/advisory work and advocacy
    • The source of their arguments: Ideology, values or interests; applied, empirical or synthesis research; or theoretical or academic research (from a conversation with Stephen Yeo)
    • The manner in which the research agenda is developed: e.g. by senior members of the think tank or by individual researchers; or by the think tank or its funders (Braml 2004)
    • Their influencing approaches and tactics (many researchers, but an interesting one comes from Abelson D. E. 2009) and the time horizon for their strategies: long-term and short-term mobilisation (Ricci 1993) (Weidenbaum 2009)
    • The various audiences of the think tank (audiences as consumers and public –this merits another blog; soon) (again, many authors, but Zufeng, 2009 provides a good framework for China)
    • Affiliation, which refers to the issue of independence (or autonomy which may be a better concept to focus on) but also includes think tanks with formal and informal links to political parties, interest groups and other political players (Weaver 1989, Braml 2004, Snowdon 2010)
  • Relational definitions that refer to the self-identification as think tank in relation to other organisations that may play similar, overlapping or complementary roles.
  • And functional, focusing on the functions played by think tanks and including (taken from quite a few authors but particularly Belletini 2007, Mendizabal & Sample 2009, Gusternson 2009, and Tanner 2002):
    • Providing ideas, people, access
    • Creating, maintaining, opening spaces
    • As boundary workers or windows into the policymaking process -and into other spaces (this comes from the literature on think tanks in China, where think tanks are described as windows that allowed Chinese policymakers to look into Western policy communities and societies -as well as allowing Western policymakers and scholars to look into Chinese policymaking communities)
    • Channels of resources to political parties, interest groups, leaders
    • Legitimising ideas, policies and practices -and individuals or groups
    • Monitoring and auditing public policy and behaviour
    • Public and elite (including policymakers) education (something often forgotten by many think tanks as it is certainly difficult to assess its impact).

I have, for now, left this definition open and am attempting to find one as I continue my research.

These descriptions (and our understanding) of think tanks are affected by different (competing) stories or narratives within which think tanks have been promoted and studied (Ricci, 1993):

  • Salomon’s House – makes us think of elites, commissions, expert (private) advice, public intellectuals, and analysis of influence networks. Think tanks play an important role –as part of or as a tool of elites.
  • The Marketplace –makes us think of efficiency, value for money, supply, demand and intermediaries, and demand-supply types of analysis. Think tanks play a more limited (producer –and sometimes intermediary) role mediated by the degree of intervention of the State.
  • The Great Conversation –makes us think of public debate, public education, transparency, and (advocacy, epistemic, professional, social, political, etc.) networks. Think tanks play many changing, emerging, relational roles.

Within these narratives, although the first two are more prominent, think tank formation and design has been driven by a number of metaphors inspired by other disciplines and professions:

  • Health –first symptoms (small organisations or associations) then causes (larger organisations)
  • Physics –efficiency (more quantitative)
  • Engineering and architecture –design, project planning and control (Logframes and modelling)
  • Foundational –break from the past and build new institutions (qualitative studies of societal change and formulation of new visions)
  • Marketing –hearts and minds in political influencing, audiences instead of publics, linked to the story of the marketplace of ideas (communications and outreach)
  • Health again –randomised controlled trials as the only evidence that matters (new skills, more academic)
  • Ecosystems – merging problems and solutions (more flexible, diffuse, networked organisations)

Tension/cycles between technocracy and democracy: The drive to set up think tanks is commonly driven by a belief that science (and expertise) can solve the ills of society. The development of the social sciences and the introduction of ever more complex quantitative methods fuel this quest for technical solutions. Every once in a while, however, ideological imperatives return to leave a mark and respond to people’s natural disposition and need for ideologically inspired deliberation (until the next technocratic phase sets in). Have a look at the box Think tanks: simple models, complicated reality in the article Stephen Yeo and I wrote for The Broker for a brief description of how different waves of think tanks have been driven by technocratic and ideological imperatives.

With this in mind, some initial findings and thoughts emerging from the literature and from visits to think tanks in the UK, Latin America, Africa and Asia that I want to present at this stage are the following (these are still rather loose ideas, so I expect –and ask for– feedback):

  • Funding –the type (endowment, core, project) matters more than the amount. (By project I mean specific activities defined in a contract with a client: they may be for research or for implementation –capacity development, networking, etc.)
  • Independence –the autonomy to choose any course of action and affiliation matters more than the quality of research (is the think tank proposing what to do or responding to requests?).
  • Quality of research can provide credibility, but credibility is far more dependent on the ideological biases and perceptions of the user than on data quality or methodological rigour. So quality does not guarantee a perception of credibility or independence.
  • Independence seems incompatible with consultancy/project funding
  • Think tanks are not supposed to be financially sustainable –demanding that they should be places additional pressure on central/administrative activities and costs (management, accounting, communications, human resources, etc.) and distracts from core think tank functions. Funding from a non-user or non-primary target audience is inevitable.
  • International donors’ political constraints promote a sanitised version of think tanks that is not common in their own countries (i.e. ODI deals with marginal politics while SMERU deals with mainstream politics –SMERU should be treated as if it were the Centre for Fiscal Studies or the Institute for Government, which may not be partisan but are clearly in the thick of it)
  • Most think tanks in developed countries are small –around 5 people– and shrink and expand depending on political and economic circumstances. They are much larger in developing countries and their structures tend to be more rigid. Again, I think this is influenced by their links to international development think tanks, which are (somewhat) sheltered from the political and economic swings that affect their more mainstream cousins: DFID’s funding to UK-based international development think tanks is rather constant (and increasing), while at the same time progressive and conservative think tanks have ballooned and shrunk many times in the last 10 years.
  • There are some apparent context contradictions that challenge our assumptions: strong and large states do not necessarily constrain think tank formation –what matters is the value that knowledge has within the ruling class (Germany, Mexico, China, for example). Political debate and contestation seems a far more influential factor in think tank formation –even if only within the State (or the single party).
  • Developed country models are perfectly relevant to developing country situations –but context gets in the way of like-for-like comparisons
  • Think tanks have multiple roles or functions –they don’t just do research (many don’t do any) or inform/influence policy, but also audit or legitimise policy, train future cadres of policymakers and policy analysts, support or mobilise resources in favour of specific political, social or economic interests, create and maintain public spaces for debate and reflection, educate the public and the ruling classes, challenge the status quo, etc.
  • The balance of roles has a lot to do with a think tank’s funding and objectives: domestically funded and focused think tanks are likely to be far more active in on-going political and economic analysis and to provide more opportunities for the movement of ideas and people between themselves and political forces. Internationally funded and focused think tanks are more likely to focus on research (although when the funder is an international NGO the focus is likely to be on advocacy-related roles)
  • It is rare for all staff in a think tank to know what its core roles/objectives are and/or to agree on them. This can have negative effects on the organisation’s cohesion.
  • There is an increasingly blurred boundary between think tanks, NGOs, consultancies, universities and publicly funded research institutes –and in some cases the media.
  • Additional competition comes from donor funded stand-alone programmes –focusing on a particular policy issue but set up as independent outfits or ‘partnerships’ between various local or international organisations.
  • The web is not yet a key space for engagement, but it will be –or at least it will affect think tanks in the near future
  • Competition is seen, in some places, by both think tanks and funders, as a bad thing: “Why do we need another economic policy think tank? We’ve already got one” is a very common response.

More specifically, some barriers and opportunities to think tank formation and development are also common across many contexts:

  • Barriers to think tank formation (and think tank community development), which are likely to change within a country depending on the particular characteristics of the state, the private sector and civil society –
    • Low quality and availability of human resources (related to low tertiary education levels and poor career progression potential)
    • Lack of funding from domestic sources (public sector, private sector, individuals)
    • Discouraging legislation (NGO law, procurement law, labour law, tax law, access to information, etc.)
    • Low interest in public policy debate (and the absence of ideological debate in particular), which turns think tanks into consultancies
    • Poorly developed (or unsupportive) democratic institutions (the State –government, legislative, judiciary; political parties, the media, CSOs) –although weak parties tend to create space for think tanks
  • Opportunities for think tank formation and development –

And there are, of course, many common nightmares/questions that think tank directors and managers across the world face (some include comments from the participants at the SMERU event):

  • Funding –nobody (except donors) wants to fund ‘independent’ think tanks, but how independent are we if we depend on projects?
  • Should we continue to let the evidence speak for itself or help it with a more convincing (but not necessarily evidence based) argument?
  • Academic or popular/current communications? Both? (But who will pay for it?)
  • Should we focus only on the policymakers that matter or fulfil a general public education role? Does this mean that we need to adopt a political agenda? (Those with a political agenda do not tend to worry about this)
  • What should we do about our website? (To which I would reply: focus on an online communications strategy and not just the website.)
  • How to be a friendly opposition?
  • PhDs or just good all-rounders (researchers who are also good storytellers, networkers, managers and fixers)?
  • How large should we be? Full time staff only or should we also work with consultants and associates?
  • How to attract and retain highly qualified staff –especially mid-career researchers with some research and some policy experience?
  • What kind of economic and non-economic incentives can be used to attract the right kind of staff to our think tank? Is it just money, or are career prospects, access to key policy spaces and people, opportunities to learn new skills, etc. equally important?
  • How to evaluate and assess our impact? (Should we even bother?) And impact on what? Policy, debate, knowledge?

This is clearly not exhaustive. There are plenty more questions and emerging findings in my notes. But for now, maybe, they are a good start. They certainly generated a great many questions and much interest among those present.

15 Tips for Effective Communication -from Philanthropy411 Blog

Kris Putnam-Walkerly outlines some key tips for developing an effective communication strategy -internal and external. Here are her headlines -the full article is here: 15 Tips for Effective Communication.

First of all she argues that a strategic communication plan should have:

  1. An internal communications plan
  2. An external communications plan
And they should include the following 13 components:
  1. Measurable goals and strategies
  2. Target audiences
  3. Identification of the message “frame”
  4. Key messages and persuasive strategies
  5. Opportunities and barriers for reaching key audiences
  6. Communications activities
  7. Communications vehicles
  8. Crisis communications
  9. Implementation plan
  10. Monitoring and evaluation
  11. Timing considerations
  12. Staffing
  13. Budget

There are a few more of these guidelines in the manuals and toolkits section of this blog.

Value for money

Rick Davies has published an ‘A-list’ of documents relating to value for money in international development -although this is perfectly relevant for domestic policy concerns.

One aspect of this analysis that I think is missing is the relevance of the intervention (organisation, project, programme, policy) being assessed. Ironically, the focus on measurement means that measured decisions are overlooked. What do I mean by this?

A measured decision (or course of action) is one that has taken care to consider all options, look at similar experiences, consult with well-informed people, undertake the necessary preliminary research and baselines, and identify key players, opportunities and bottlenecks, etc. before a decision is made on the activities, strategies, programmes or policies to be pursued.

So, if an organisation manages to have a huge impact on the web at a relatively low cost BUT we find that online communications are irrelevant to influencing the behaviour of local chiefs in Sierra Leone, then –regardless of how cheap the online strategy was, or the number of hits and downloads– the whole strategy cannot represent value for money.

Or if a programme manages to quickly change a policy but the policy is simply unimplementable –and therefore likely to lead to corruption– then how can we say it was good value for money? It should not even have been attempted.

Anyway, enough from me: over to Rick:

[RD comment] Is “Value for Money” becoming anything more than a meaningless mantra? Sounding important, but in practice meaning something different to each and everyone who hears it? And impossible to measure…?

