
Posts tagged ‘IDS’

Corporate websites: do we need them?

Earlier in 2012, Nick Scott wrote about the decline of the corporate website but then went ahead and redesigned ODI's site. IDS also redesigned and relaunched its website. And so did IIED. (On Think Tanks did, too.) So, are they in or are they out?


Is it wrong to herald the death of the institutional website?

James Georgalakis, from the Institute of Development Studies, argues that institutional websites can demonstrate credibility and allow users to explore an organisation's products and services, but must always be clear and geared towards users' needs.


“A policy brief is a piece of paper. It doesn’t DO anything on its own”

 

[This post has been updated]

Quite some time ago, Jeff Knezovich reported on a study that was due to be published: Should think tanks write policy briefs? In that post he wrote something that we should all keep in mind:

A policy brief is a piece of paper. It doesn’t DO anything, and is therefore unlikely to have impact on its own.

This is something I try to remind people of all the time when discussing policy influence and research uptake. In fact, most communication outputs by themselves probably aren’t very impactful. It is a shame, then, that these outputs tend to be listed as deliverables in contracts with funders and thus tend to become viewed as an ‘end’ rather than a ‘means to an end’.

Well, the paper is out: Can a policy brief be an effective tool for policy influence?

3ie and the Institute of Development Studies (IDS), in collaboration with the Norwegian Agency for Development Cooperation (Norad), explored the effectiveness of a policy brief for influencing readers’ beliefs and prompting them to act.

A multi-arm randomised controlled design was used to answer three research questions: Do policy briefs influence readers? Does the presence of an op-ed style commentary within the brief lead to more or less influence? And does it matter if the commentary is attributed to a well-known name in the field?
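For readers less used to the jargon, 'multi-arm' simply means that respondents were randomly assigned to different versions of the treatment (and a control). A minimal, purely illustrative sketch of that kind of assignment in Python follows; the arm labels and the equal allocation are my assumptions, not the study's actual protocol.

import random
from collections import Counter

# Illustrative arms only; labels and equal allocation are assumptions, not the study's design.
ARMS = [
    "control (no brief)",
    "policy brief only",
    "brief + unsigned opinion piece",
    "brief + opinion piece signed by a well-known researcher",
]

def assign_arm(participant_id, seed=42):
    """Deterministically assign one respondent to an arm with equal probability."""
    rng = random.Random(seed * 1_000_003 + participant_id)
    return rng.choice(ARMS)

# Assign 1,000 hypothetical respondents and check that the arms come out roughly balanced.
print(Counter(assign_arm(i) for i in range(1000)))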

In response I posted a rather long email on the ebpdn discussion board. I repost it below with some edits:

I do not want to always be the one to state the obvious (and this probably won’t make me any new friends, I am afraid), but was all this really necessary just to conclude that:

  • Policy briefs have to have clear messages;
  • People who are well known and respected are more likely to be listened to than people nobody knows about;
  • Design matters; and
  • Briefs should be targeted at the people who matter?

As Jeff’s blog shows, IDS has known all this for quite some time. Even without an RCT, I am sure that its communications and outreach staff knew what they were doing. All of these ‘lessons’ are also key components of ODI’s research communication workshops (which go all the way back to 2009!), and the communications team there has had how-to guides that say all of this and much more for far longer. CIPPEC and other think tanks working on the subject have been saying and doing this too.

RPC advocates might say that before this study we did not really KNOW any of this; we just thought we knew. But did we really have to KNOW this? (In fact, as I will argue below, I do not think the study is robust enough for us to really KNOW if policy briefs work, why and when.) Surely there are other, more important things that we do not know about for which RCTs can be really useful… but all of this for a simple piece of paper?

I had a few comments to make on the paper:

First, the theory of change for a policy brief presented in the study is a perfect example of a very important confusion: the mistaken assumption that evidence-based policy and policy influence mean the same thing. This, as Emma Broadbent showed in her series of papers on the political economy of research uptake across four policy issues in Sierra Leone, Uganda, Ghana and Zambia, and as Kirsty Newman later blogged about, is just not the case. Evidence-based (or informed) policy and policy influence are two very different things.

Second, there seems to be an assumption that, as in medicine, the reader-patient of the policy brief receives no other influences or treatments. It assumes that there are no other forms of communication affecting his or her ideas. If the intention was to test the effectiveness of a policy brief, surely the treatment should have included other types of communication channels and tools to see which one had the most significant effect (on its own or in combination with others -but which others? There is no standard mix of communication tools that is used all the time by all organisations in all contexts). Now, that would have been truly interesting. But then how does one create a comparable situation across all the cases? A control? Some people would have had to be isolated from all other sources of information for the duration of the study.

In a way they did include another tool, an opinion piece, as part of the possible mix. But since they did not present the opinion on its own, how do they know that the opinion of the respected researchers would not have been enough to influence the patient? Again, that would have been an interesting finding.

Third, the choice of topic for the policy brief presents another problem. The thing is, well, nobody really cares. Of course nutrition is an important issue, but not all of the people who responded work on it. And even for those who do, how ideological is the issue for them? For researchers in developing countries the real challenge is not in the more technocratic issues like these. In these cases the absence of change, even when the evidence is clear, is more likely down to a lack of motivation or capacity to change and implement recommendations than to poor communication of the findings. The challenge is in the ideological issues: those value-heavy policy choices that the developed world deals with on a daily basis but that the international development community tends to dismiss in the developing world. It would have been more accurate to test the effects of policy briefs that dealt with free trade agreements, subsidies, the benefits (or costs) of large mining projects or the privatisation of water. Now, that would have been interesting.

The paper is full of limitations, and most are described at length in it. A key issue that comes up in the analysis but does not seem to merit a mention in the section on limitations is that the policy brief focuses on a specialist subject that not many people know much about. So really, all that should matter is whether the people who do know about it change their minds or not. It makes little sense to see if someone who knew little or nothing about something changed their mind after reading about it. Of course they would. Policy briefs are meant to target informed people -interested, too. They are never expected to, on their own, convince someone of something they disagree with, or to get someone who does not know anything about an issue excited about it.

So of course one would expect that, after reading the brief, more people who knew nothing about the issue beforehand would say they believe what it says and that the evidence behind it is strong. And of course people who already knew about it would not be likely to dramatically change their beliefs or opinions about what they already feel they know. But again, a policy brief is not there to change people’s minds but to inform them of a course of action. And in fact this is what the study found.

It seems to me that the study ought to have been clearer about what a policy brief is and is not for. And maybe it should have been clearer that a policy brief is never published alone. If a policy brief is put out as a stand-alone output then this should be seen as a failure in communication, and so it does not seem appropriate to encourage research centres to do so by suggesting that all may be OK as long as they follow the recommendations of this paper.

The format of the policy brief is also important. The test used in the RCT is based on an IDS format, which I have no issues with (except that 4 pages may have been better than 3). But, as I found in a review of a RAPID project to help IDRC programmes in Latin America, Africa and Asia to develop policy briefs back in 2008 (if memory serves me right), different organisations have different views of what a policy brief ought to be. This is because different academic communities have different writing styles and expect different things from their researchers. Policymakers, who have been part of those communities at university too, expect different things from researchers. So the three-page policy brief (which, by the way, is an odd length; ideally make briefs 2, 4 or 6 pages -even numbers, so they can easily be printed and folded) may work all right in the UK but might not work in Vietnam or in Egypt. And the writing style used by the paper may excite some but baffle others.

An important question to ask about the effect of the tool is what its audiences did as a consequence of reading it. On this, nothing seems to be particularly new. The literature, and every communicator in the world, knows that things that take more effort are less likely to be done by the recipient of a message. We all know that. In fact, that is the basis of Outcome Mapping’s Progress Markers: first reactions, then more active participation and engagement, and finally taking on the initiative and more transformative changes. Maybe they did an RCT we are not aware of.

Effort, of course, is linked to the power that people have; another ‘big’ finding of the paper. And power is likely to be linked to their education level and position in their organisations. So it is rather obvious that more powerful individuals will be more likely to act on the recommendations of the policy brief in ways that require resources and other people to do things for them.

Finally, the authors have something to say about the power of influential people and names. They argue that the reputation of the messenger can convince the reader. Well, on page 73 the authors mention a study by RAPID that contains a survey of researchers’ views on the role of evidence in policy and the value of policy briefs; and they use its findings as undisputed fact. Unfortunately, they did not check the survey. If they had, they would have found that it was far from representative of all researchers (I am sorry, but 200 or so researchers from an ODI/SciDev mailing list are not representative of all researchers in the developing world), and so the correct way of expressing Jones’ and Walsh’s opinion (because that is all it can be said to be) is that half of the respondents to their survey (which was terribly biased towards people we knew) were of the view that research communications are of poor quality and that policy briefs could help. So the authority effect certainly worked here. Priceless.

In the end the paper does not tell us if a briefing paper works (without caveats), why it works and when it works. This is what one needs to know. Does it work best at the end of a study? To set the agenda? To advise on implementation? Does it work best when combined with blogs, opinions, videos, a working paper, a good presentation, personal networks? Is it better emailed or delivered personally? Is it better if it is 1 page? 2 pages? 4 pages? 6 pages? What about the writing style? How many tables should it have? What colours are the most appropriate? Surely the effort to test once and for all if a policy brief works should have answered some of these questions. Otherwise, I think I’ll stick with common sense.

I am not trying to discourage the use of RCTs but I do feel that this is a bit too much. None of this is new and it did not need to be studied in this way. And I think that the medicine metaphor has been taken too far, as well. Of course, do not take my word for any of this. Do, please, read the paper and make up your own minds.

My reaction is in part fuelled by the fact that the cost of the study would have paid for a very good communications expert to work for a year at a think tank in a developing country. In fact he or she could have helped a few think tanks in that country using the various channels and tools available to them and, most of all, using experience and common sense. The USD 20,000-plus spent (I must say that this full disclosure is worth mentioning and applauding) could have, for example, paid for a couple of years of running the ebpdn in Latin America, which now does not have an active facilitator. It could have funded several study tours, or a regional conference like the one organised in Latin America last year where about 40 think tanks came together to learn from each other and present original papers and ideas. I am sure you can think of other, better ways of using this money.

As I have recommended in the past, researchers and think tanks should just get on with it. Do not wait for the RCTs on blogs, opinion pieces, videos, Twitter, working papers, etc. There are lots of ways to communicate research, and all you need to do is pick the mix that works for you and your organisation (for no other reason than that it makes sense, you have actually thought about it, and it is within your reach), and then make sure you do things right. If you need help there are people out there thinking about this (like Nick Scott, Jeff Knezovich, Vanesa Weyrauch, Laura Zommer or Lawrence MacDonald, among others) who I am sure are willing to lend a hand and provide some thoughtful advice. But remember that a poorly written briefing paper, a press release a day too late, a busy and static website, a boring event, or a poorly scripted video will not work regardless of the brilliance of our ideas; and that will be more a testament to our incapacity than to the ineffectiveness of the tool.

As an afterthought: it is interesting that the study did not consider whether a policy brief based on proper and robust research would have been more influential than one based on questionable research -even if it had a clear message, was written by someone well known, was nicely designed and was accurately targeted. Now THAT would have been interesting.

The response to my post has been mixed on the ebpdn discussion list. 3ie, IDS, and others have defended the paper and the innovative effort of the researchers:

Kirsty Newman:

On Enrique’s question about whether it told us anything new, I agree that the findings support my pre-existing suspicion BUT remember that not everyone thinks like we do! There are plenty of examples of research communication strategies where the final objective is the production of a policy brief (or creation of a portal or holding of a seminar or whatever). This research is important evidence which helps build a case that if you seek to influence policy, you need to do more.

Maren Duvendack from ODI:

I have taken quite an interest in the 3ie-IDS-Norad study as I actually think that it is a pretty good idea! Looking at the influence of research on policy is pretty tricky (that’s essentially what the study is trying to do). Mainly qualitative tools are used in this context and many people (mainly economists!) find this very frustrating and crave some sort of quantification of the policy influence of research and the study does exactly this! Quantifying policy influence of research is not new, some people (mainly economists again!) tried calculating internal rates of return for example but that’s pretty flawed too.

Anyway, this is just another RCT and given their limitations one should always take their results with a pinch of salt!

But there have also been some voices supporting my critique:

Nick von Behr from behroutcomes.co.uk:

As a bit of an outsider to Development I agree with everything that Enrique says in terms of policy influence. Better to spend the money on actually doing it through trial and error rather than taking a hugely scientific approach, laudable though the efforts have been. And as he says what matters is the balance of research evidence that actually supports/contradicts a new policy direction in whichever field (mine is education).

And from an actual communications practitioner, Maryam Mohsin, also at ODI:

As a long time comms professional I find it a bit bizarre that so much time, resource and effort is being invested into applying RCTs to test how we can influence policy. Are we really trying to find a one size fits all way to communicate? Anyone with a bit of common sense will tell you how far that will get you in the complicated world of policy influence. And any communications professional who needed this RCT to tell them the key messages it contains should be fired immediately (I say in half jest – but I do wonder who this study is aimed at).

I simply do not believe policy influence is an exact science, it’s based on capacity, dialogue, strategy, pro-activeness and reflection. I think we need to stop trying to find ways of putting square pegs into round holes and shift away from placing emphasis on the tools, and move towards placing more emphasis on the strategic use of these tools.

We play a small part in the whole process of policy making, through to influence. We need to humbly recognise this and spend more time looking at what we do have the capacity to contribute towards and focus on being strategic in our attempt to give ourselves a better chance of success

This point made by Maryam echoes a follow up email I sent to the community in which I suggested that:

I see no future in this line of questioning. Sure, it is fun and I can see why a researcher would like to do it -but it is, in my view, useless and dangerous. I would encourage us instead to support organisations to reflect on what they do and how they do it. Invest in people. See Laura Zommer’s fantastic arguments for inspiration. But don’t do what she says; follow her thought process and arrive at your own conclusions. Involve your communicators in strategic discussions. Involve your researchers in thinking about how they communicate and what influences change in their own policy communities. Come together in exercises like these supported by GDNet or the richer cases that this network supported in Latin America and Africa. THINK about it, don’t just wait for proof. There will be no proof. The IDS study’s caveats are way too large and significant to take any of its conclusions seriously. The best way forward is more and richer critical reflection (not just descriptive cases of successes).

How can we make research communications stickier? Reflections from the Institute of Development Studies

By James Georgalakis, Communications Manager, Institute of Development Studies

In line with some of the implications presented by Nick Scott on digital disruption, James Georgalakis analyses what makes some research stickier than the rest. Which were the stickiest stories on the Institute of Development Studies’ (IDS) website in 2011? Is Search Engine Optimisation (SEO) cheating? And can we improve research impact by dropping in the names of royals and celebrities? These are just some of the questions James tries to answer in this blog.

 

News or blogs on development research are highly unlikely to compete with viral YouTube sensations involving celebrities or pets that reach millions in hours. But a quick scan through the top ten most viewed news stories on the IDS website in 2011 still tells us a lot about what it is that makes some research super sticky.

In fact, some IDS news stories were so sticky they came top of the 2011 list even though they were not even posted in 2011. A story about Ian Scoones’ book on Zimbabwe’s land reform posted in November 2010 came in at number one followed closely by Andy Sumner’s New Bottom Billion – also a 2010 story. What did these stories have that others did not?

Well of course exciting, original research helps. And both of these examples tick that box. Scoones’ revelation that Zimbabwe’s land reforms were not so bad after all quickly ignited a lively media debate that has just run and run. It was still running a year on, with this BBC Radio 4 documentary looking closely at his claims. Sumner’s startling findings on the changing nature of poverty also ignited debate in the media and academia and stoked up the blogosphere. This is why traffic just keeps on finding its way back to the original web stories.

However, not all of our top ten scorers from this year can claim to have promised robust research with counterintuitive findings. Consider our official number one story from 2011 (the most page views of all the stories actually posted in that year). It is news of a special Robert Chambers conference with links to all the related materials. This is not hard news and it is not particularly surprising or controversial. Here we have the awesome stickiness of a big, well, stellar, name in the development research community. How often is Chambers’ name googled? How big are the networks of people who have shared the link with one another and will have flocked to access this content? How quickly did news of his conference spread across the blogosphere with links back to the original content?

Robert Chambers was not the only big name to make the IDS top ten. At number two we have Kate Middleton and Prince William who, according to the IDS headline from last April, ‘got engaged in Africa’s land grab hotspot’. Yes, that’s right: we took the convergence of a royal wedding, the happy couple’s obscure connection with land grabs and a recent IDS-hosted academic conference on land grabbing to produce something really sticky. Just think how many royal wedding fans inadvertently became informed on the land grab issue. Such are the rewards of working in research communications. To be fair to the Future Agricultures Consortium, who organised the land grab conference, this is a pretty sticky topic even without royal endorsement. It features in no fewer than three of our top ten spots from 2011.

The tactic of using highly topical and sticky words or names in your headline is known as search engine optimisation (SEO), or cheating, and it works. Of course topicality is itself one of the greatest assets of all. Just take the Arab uprisings or the Horn of Africa crisis, which both, perhaps not surprisingly, made the top ten. However, as any newspaper sub-editor will tell you: if all else fails you just need a great headline. What else can explain a story about a podcast from an IDS Sussex Development Lecture coming in at number eight? Don’t get me wrong, it was a really great lecture, but a story about a podcast! It was titled: ‘Decline of the NGO Empire – where next for international development organisations?’ Pretty good, huh? Yes, it was one of mine. Also very nice (and not one of mine), slipping in at number ten, is: ‘Taking the scare out of scarcity – how faulty economic models keep the poor poor’. See what we did there?

Now, I know the whole subject of traffic drivers and SEO is way more complicated than this. Our study of the IDS top ten has severe limitations. Clearly content published early in the year has an advantage over the stories that came after, and some of our high scorers were boosted by extended periods on the IDS home page. Plus it is about push as well as pull, and for that we have to start thinking about the role of social media, where content is positioned on the site and a multitude of other factors. This is another blog for another day, but my point is this: if you want lots of people to find the story about your research irresistible, consider this 2011 IDS top ten. Methodologically sound and original research is great, but a crowd-pulling name, timeliness, a big surprise and a great title all help a lot too.
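If one wanted to correct for that head start, a crude option would be to divide each story's unique page views by the number of days it has been live. A rough sketch in Python, using invented figures rather than the real IDS numbers:

from datetime import date

# Invented example data: (title, unique page views, publication date) - not the real IDS figures.
stories = [
    ("Revolutions in development (Robert Chambers)", 12000, date(2011, 2, 1)),
    ("Prince William and Kate Middleton engaged in Africa's land grab hotspot", 9500, date(2011, 4, 20)),
    ("Decline of the NGO empire", 4200, date(2011, 10, 5)),
]

AS_OF = date(2011, 12, 31)

def views_per_day(views, published, as_of=AS_OF):
    """Unique page views divided by the number of days the story has been live."""
    days_live = max((as_of - published).days, 1)
    return views / days_live

for title, views, published in sorted(stories, key=lambda s: views_per_day(s[1], s[2]), reverse=True):
    print(f"{title}: {views_per_day(views, published):.1f} views a day")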

If you want to know more about this stickiness business I suggest you read Made to Stick: Why Some Ideas Survive and Others Die, by Chip and Dan Heath which should be a compulsory text for all those working in research communications.

Happy holidays

And here is that 2011 IDS website news stories top 10 in full (listed by order of unique page views to date):

  1. Revolutions in development: reflecting forwards from the work of Robert Chambers
  2. Prince William and Kate Middleton engaged in Africa’s land grab hotspot
  3. Bellagio Initiative starts an IDS-led global debate exploring the future of philanthropy and international development
  4. Debating the global land grab
  5. How a citizen-led approach can transform aid to governance
  6. The East African food crisis – beyond drought and food aid
  7. Experts warn of new scramble for Africa at an international conference on land grabbing
  8. Decline of the NGO empire – where next for international development organisations?
  9. The people revolt – why we got it wrong for the Arab world
  10. Taking the scare out of scarcity – how faulty economic models keep the poor poor

Zoom zoom zoom, capoeira mata um: communications in the age of austerity

‘Capoeira mata um’, or perhaps more accurately, ‘capoeira foi morto por um’ – at least on one sunny day last summer.

By Jeff Knezovich, Policy Influence and Research Uptake Manager for the Future Health Systems Research Programme Consortium*

Now if you’re like 99.9% of readers of this blog, you’re probably wondering a) why Enrique let me do a guest blog, and b) what in the world Brazilian Portuguese has to do with austerity communications. Let me explain.

‘Zum zum zum’ is a popular song on the capoeira circuit. Indeed it is so popular that you might recognise it from certain Mazda adverts. Zoom zoom.

The first line literally means ‘capoeira kills one’, but that’s not what this story is about. It’s about a time when ‘capoeira was killed by one’, how that has changed the development communications landscape, and what lessons development policy entrepreneurs can draw from the famed Brazilian martial art/dance.

When the Conservative-led coalition came into government in the UK last May, one of the first targets in their crosshairs was ‘profligate’ Labour spending, which they argued had left the country in dire economic straits. ‘Communications’, synonymous with spending, quickly became a dirty word across Whitehall. And, although they promised to protect – and even increase – aid spending, the Secretary of State for International Development, Andrew Mitchell, made value for money of British aid a clear priority. Among other things, that meant cuts to a cherished Labour objective: making the argument for aid to the British public.

Indeed, who could argue with cutting aid funds to a ‘Brazilian dance troupe’ in Hackney (a neighbourhood in East London)? In one of his first ministerial speeches, Mitchell made clear that these sorts of activities would no longer be tolerated and that the aid argument would be won not on explaining it to audiences at home but by improving lives abroad.

By mid-2010 the UK, not just the newly rebranded UKaid, entered an age of ‘austerity communications’. Government websites were among the first to be scrutinised. As it turned out, the UK Trade and Investment website, in what is frankly a crude measure, cost the government nearly £12/visitor, and that’s discounting staff and operating costs. Hardly value for money, by any definition.

Such costs called for a rationalisation of government websites, an edict that has trickled down through the ranks of DFID. In the most recent advice given to its large portfolio of research consortia, it was suggested that no programme should have a standalone website. Recommendations have also emerged that no money should go towards promoting large programmes as brands independent of their host organisations, and that hosting events costing over £20,000 requires cabinet-level approval.

While the value for money of these arbitrary rules is dubious at best, the push for austerity communications should be welcomed by development researchers, research communicators, knowledge intermediaries and policy entrepreneurs alike. Just as it is incorrect to assume that less polished-looking communication activities are cheaper (just ask the 2012 Olympic committee), it is equally untrue that communication has to be expensive. An unhealthy economy has emerged in the research communication field: from expensive and self-indulgent websites, to exorbitant per diems for participation in events (which may soon be considered bribery in certain circumstances under new UK legislation), to paying for media placement.

My friend and former colleague Nick Scott from ODI has spoken widely about free and low-cost online tools that can help establish and bolster an online presence, so I will instead broaden the discussion in the rest of this post to how communications has the opportunity to be more effective in these tight times.

Ironically, capoeira’s existence today is a shining example of massive impact with limited resources. Capoeira emerged from slaves of African origin working the sugarcane plantations of Brazil in the 1600s. As a martial art, slaves used it for self-protection, to escape and to defend Quilombos (informal settlements of escaped slaves and others living outside the law). As capoeira was a clear threat to the Portuguese slave owners, it was outlawed, forcing capoeiristas to disguise the practice as a form of traditional dance. And perhaps, at the most basic level, this clandestine approach of disguising traditional approaches to research communications will be necessary, but only when those approaches are the most appropriate techniques for reaching an objective.

Ultimately I hope that these new rules force us to change rather than conceal. And here capoeira offers more lessons to inform an innovative approach to research communications.

There are several styles of capoeira, the two most popular being capoeira regional (pronounced ‘hey-shu-nal’) and capoeira angola. Capoeira regional is the newer, flashier side of capoeira, with rodas usually going at a quicker pace and with more jumps, spins and kicks. The more traditional angola style is comparatively slow-paced and low to the ground, with combatants usually keeping at least one hand touching the ground at all times. Both styles are popular, but capoeira angola is considered the more difficult. It is a reflective and strategic style and requires greater control – consider it the chess of the martial arts world. And perhaps these two styles represent the difference between research communications and marketing as it was promoted under the Labour government (capoeira regional) and in the era of austerity communications (capoeira angola).

There are a few principles operating in capoeira angola: 1) conserve energy and maintain endurance; 2) use the slow pace to develop an understanding of the opponent and use that understanding to defeat her/him; 3) exploit opportunities and make every attack count. Development communications would do well to abide by these principles.

1)    Conserve energy and maintain endurance: As Enrique has noted elsewhere, think tanks and research organisations that chase visibility at the cost of substantive research and influence do so at their own peril. The fact is that we are operating with finite resources and there is an opportunity cost associated with pursuing any given engagement activity. To that end, we must recognise that substantive influence does not happen overnight. We need to be prepared to invest in long-term strategies that focus on building relationships and trust – neither of which is founded on glossy brochures.

2)    Understand the opponent: At its least, austerity communications should give us time to pause and reflect on how policy influence and research uptake actually occur in our individual contexts. Maybe getting an article into a journal with the highest impact factor isn’t going to change practice on the ground. Maybe the long research publication isn’t the best choice in Cambodia, where most business and politics is transacted verbally. Maybe the flashy website that woos donors isn’t the right option to reach researchers in the D.R. Congo where internet penetration is notoriously low.

Additionally, a good understanding of our audience allows us to extend a ‘being there’ strategy from the web to other forms of communication. Beyond thinking of where in the web world your audiences are spending their time, also think through: What publications are your target audiences already reading? What media do they already engage with? What events are they already attending? Spending effort getting into these spaces may be much more valuable than simply creating more of your own spaces and spending resources to market them. Enrique’s recent post on ‘confirmation bias’ should be a good reminder of this – people are predisposed to agree with evidence from a source they already trust.

3)    Make every attack count: Value for money doesn’t necessarily mean spending less money; it means spending it wisely. Instead of a throw-everything-at-the-wall-and-see-what-sticks approach to communication (which can be particularly valuable when working in complex environments, as long as there are in-built learning mechanisms), under austerity communications we will likely need to be more selective in our communications activities. So when an opportunity does arise, and we do think that it is the right intervention for the right objective, go ‘all in’ and put significant resources behind it.

In a review that Enrique and I both participated in a few years ago, of DFID’s recommendation to spend 10% of funds on communication activities for certain types of programmes, we found that some programmes were taking the advice literally and cascading the 10% funding throughout all of their interventions – but some research is more communicable than other research. Austerity communications will require a greater investment in horizon scanning (and tools that facilitate this), and then taking every advantage of opportunities as and when they do arise.

*[This is the first of I hope many more contributions from practitioners and experts in the field of think tank management, communications, funding, etc. If you would like to recommend someone please contact Enrique Mendizabal on enrique@mendizabal.co.uk]

more on how to present research

Nick Scott’s and James Georgalakis’s comments on my post on how to organise and present research are worth sharing beyond the comments section of that post, so let me copy-paste a few of their arguments here:

Nick (ODI’s online communications manager):

“Websites for think tanks [are] the place where all other communications activities come together. Most [users] know what they are looking for when they arrive on a site, and are there to find it. That is why you need to be able to organise information by a number of competing taxonomies to give them the greatest chance of finding it; all the while trying to make those taxonomies user-focused and minimise confusion between them in the user.

“The trick is to achieve a balance of all the ‘types’ of site you have [to] offer all to those who want to find something particular, and find ways to highlight flagship reports, news, information about the organisation and any other taxonomies to users too.

“you need to be able to offer all those things all the way through a site, because the vast majority of your users won’t arrive on a home page, they’ll arrive on a page two or three levels down… [because] one of the most effective ways of reaching people … working out how …  you’re going to get them to see the information in the course of their travels around the internet in the first place…

“What is your search engine optimisation strategy to get your information top on Google? How are you ensuring that your research findings are linked to from all the top online sources for each specific sector you work on? How do you get an email to a key player, and more importantly get them to read?

“It is quite a challenge and I’m not convinced that any of the organisations [in the blog] have made much progress in making the online space work for them as a proactive route for influence, rather than a reactive one that allows their information to be found and used when needed.

“In response to your question on how to present clear stories online, there are numerous ways to do it, but without some manual synthesis and a clear and focused subject it is difficult. Blogs can be great at this, as can events, podcasts, presentation. I don’t think it would be easy to achieve a clear story by just listing a set of resources, even though that is much easier to do. The most relevant attempt at this for ODI is our ODI on… pages, which we create at times of international events or the like, and where the summary should provide a synthesis of some of the key areas highlighted within the list of documents.”

James (IDS’s communications manager):

“I think you get an even starker demonstration of [the difficulty of presenting research] if you look at organisations’ Annual Reports. Here you will see some present a detailed description of themselves and organise the publication around the structure of their organisation, whilst others use it to report on key achievements and present their brand or vision. Certainly here at IDS we did the former but are attempting this year to move to the latter.

“The ‘all that we do’ approach also recognises that our websites – or at least the home pages – have a broader set of audiences. At IDS our website is a marketing tool which promotes our courses and other services, such as Knowledge Services, as well as providing a platform for discourse on our research.

“Our home page is more concerned with reducing bounce rates and supporting our SEO strategy than anything else.

“That is not to say that we do not also struggle with the issue of externalising internal processes on our website. There is more debate here, for instance, about the search-by-subject research categories than anything else.”

These are all very important points that illustrate the complexity of their job. Some things come to mind (which are not just relevant for websites -as James suggests above):

  • ODI, IDS and CGD have much broader publics -they are, after all, dealing with global issues and are attempting to reach publics around the world. ODI, IDS and other think tanks in developing countries have a contracting business model (and ‘sell’ a range of goods and services) and so must attempt to present these.
  • Think tanks targeting a particular public -say the British, Ecuadorian or Indian- and more so those with a specific sector focus, do not need to market themselves so widely, and their front pages can therefore focus on a particular report, event or message.
  • Something similar could be said about funding types: core funding may reduce the need to market one’s services, while project funding makes it much more important.

I’ve been trying to find a page to illustrate this (I am sure I have seen one but cannot find it now): another way of presenting information would be to outline our publics more explicitly. That is, have versions of the site or report for policymakers, researchers, NGOs, activists and the general public. Brookings allows something like this by encouraging the user to create a portfolio.

What do you think?

on how to organise and present a think tank’s research

I had a very interesting conversation with Andrea Ordonez from Grupo FARO today. We were talking about how to organise the research programmes of the think tank and it occurred to us that there is often a tension between how research is organised internally and how it is presented -mainly through a website. When I was at ODI this was a constant struggle: hence the lists of programmes, themes and regions; never mind the long list of resources.

The reality of course is that work at ODI is not organised by any of these categories.

Internally, research needs to be organised to maximise quality and efficiency -it must help to manage, work across the organisation, win business, attract new staff, etc. Externally, it must be organised to influence (inform or educate) its public. Two different audiences and objectives.

So it should be easier if one separates the two -internal organisation about management, external about influence- and recognises that there does not need to be an automatic link between them. Not all research and analysis is worth publishing -nor, in my opinion, should we publish different types of outputs next to each other as if they were comparable: books, journal articles, reports, opinion pieces and blogs all have their place.

Furthermore, external organisation of research (the presentation and communication of research) ought to present a coherent policy message. How else could someone make a decision?

So how to present research? I’ve been looking at some front pages (which is one way of presenting research outside of the organisation) and have found some approaches (but note that there are many overlaps):

Does anyone else have examples of these? Or any other? Maybe favourite think tank websites that may be presented as best practices?

Please send your recommendations.

Another year, another ranking of think tanks (and surprise surprise, Brookings is still the best)

I’ll accept that James McGann’s effort to identify and rank all the think tanks in the world has some positive outcomes. First of all, it has people talking about think tanks -and some think tanks are even becoming aware that there is a debate out there about themselves. Second… no, that is it. [Also have a look at Goran Buldioski’s blog on the same subject]

I am still of the opinion that going beyond the counting and study of individual think tanks (and their immediate systems) is useless and misleading. Here are five reasons why I do not support this ranking, and then a longer semi-rant at the document.

  1. Think tanks cannot be de-linked from their political, social and economic environment; since think tanks define themselves in relation to the other players in the system. Brookings cannot be described without references to US bipartisanship -when we say independent research in the US we mean independent of either party (as well as of other interests). But independent means something entirely different in China, India, Brazil, or Argentina. Global and regional rankings are therefore unhelpful when the focus of think tanks is local (not local as in of this town or neighbourhood but of their direct interactions).
  2. The list is too diverse to be relevant. The definition of ‘think tanks’ has improved since I last commented on it to include politics. But he has now included some organisations that cannot possibly be compared with the rest. Let’s put it this way: if I defined a mobile phone as a device that allows me to make phone calls while on the move, I could be tempted to include laptops (after all, I can make Skype calls ‘on the move’), but I wouldn’t, because it would be confusing and unhelpful. A mobile is one thing and a laptop is another. Maybe they can do things that the other can also do, but that does not make them the same thing. Amnesty International, Human Rights Watch, Transparency International and the various foundations (funders rather than researchers) are included… how useful is it to compare them with IPAR in Rwanda or GRADE in Peru?
  3. It is still based on perception rather than thoughtful analysis. Thoughtful analysis would have required the development of a database with answers to all the questions or criteria presented on page 56. These are good questions, but the nominators were not asked to provide answers to them, only to use them to think about their nominations. This means that it is all about presentation rather than content: still a popularity contest among people who clearly cannot know about every context and must therefore rely on what is accessible to them (this is obvious when one realises that most of the top non-US think tanks are either focusing on, or working under the banner of, international development, security and foreign affairs). The kind of analysis that I am attempting, and that Goran Buldioski, for instance, is undertaking in Eastern Europe, is absent.
  4. A ranking must have a clear definition of what the top spot implies: top 25 by revenue, by number of staff, by number of publications, by happiness of their staff, etc. It is the same as with sport: Usain Bolt is the fastest sprinter. The Ballon d’Or on the other hand is a perception based award given to the best football player according to the votes of coaches and captains of international teams, as well as journalists from around the world. So you either define why one wins or you define who votes; but you cannot keep both unclear or hidden.
  5. It is dangerous. It creates incentives towards investing in profile raising and visibility rather than focusing on research and research capacity. The director of a think tank that is not on the list emailed me, worried about its absence: what should we do? Given that they are one of the most influential think tanks in their country, undertake research of the highest quality and are running groundbreaking and innovative initiatives (copied all over the world), my answer is: nothing. And those who make it to the list because they are popular rather than good are incentivised against doing anything about it, because they may believe that the list confers credibility on them.

My recommendation (if some sort of ranking is what we want) therefore continues to be the promotion of national think tank awards like the one run by Prospect Magazine. It is a shame, really, because this project has the potential to collect fantastic data on think tanks; unfortunately, because of the focus on the ranking, a huge opportunity is being lost.

On the report itself, here are some preliminary comments after a single read (I promise to give it another go):

The first thing I notice is that at the top of the list are Brookings and Chatham House. I often go to their websites to find out a bit more about them and see that, yes, they have fantastic research and a wide range of products and are clearly at the top of their game. And when I can I go to Chatham House events. So far so good, I guess. But then, second and third are Amnesty International and Transparency International. I know these organisations well. They are quite active in my country (Peru) but they are international campaigning NGOs, not think tanks. Transparency International participates in electoral processes as an observer. Is this the role of a think tank? Amnesty International campaigns for human rights and against their violations. I don’t think that researchers lobbying for more funds and freedom for think tanks in many developing countries would like their governments to think that this would mean more space for TI and AI to operate there too. Apples and oranges?

Then I remember that the winner of Prospect Magazine’s 2010 Think Tanks Award was the Institute for Government; I check the top non-US think tanks but find that, while there are other UK think tanks in the list, the Institute for Government is nowhere to be found. In fact, it is not mentioned in the whole document. That is odd but, OK, not all rankings have to agree. What about Policy Exchange? Policy Exchange was set up by supporters and members of the Conservative Party and was instrumental in developing the ideas that shaped the arguments that won the 2010 election and that are guiding the new government’s policy agenda. There is a fantastic indirect account of this in Peter Snowdon’s book, Back from the Brink. No, Policy Exchange is not listed either.

To make sure I am not missing anything I jump to the table for Europe (page 31) but no luck. They are not there. But the Overseas Development Institute is.

Now, as much as I like ODI, I am sure that it is not more influential than Policy Exchange. So, wait a minute, maybe this ranking is not about influence but about worth? About value? Reputation? Is it about finding the ones most capable of speaking truth to power? But why then have an index every year? What can change year on year to get a new one into the ranking? An annual index suggests that think tank quality can change in a short period of time and that it is therefore possible for an unknown organisation to make it to the top if it happens to do all the right things. Is that possible in this ranking? CGD did it, more or less, on the basis of a good combination of research and communications. But is it possible for think tanks in small countries focusing on local issues? And is it really a worthy end?

The more I see Chatham House and other security and international relations think tanks the more it feels as if the theme of this year’s ranking is foreign policy or international development -maybe that is what this year was about. Or maybe this is what the annual ranking should be about: focus on a single theme so that more and better analysis can be done for each think tank.

Never mind, let’s get back to it. On to Latin America, which I know a bit. The list includes the Centro de Estudios Publicos (CEP) from Chile, the Centro de Implementacion de Politicas Publicas para la Equidad y el Crecimiento (CIPPEC) in Argentina, the Instituto Libertad y Democracia (ILD) in Peru (which, by the way, appears at both 15 and 24), and CEPAL (the UN’s Economic Commission for Latin America and the Caribbean, or ECLAC in English). This is interesting. CEPAL is the only truly regional policy research centre in the list -but it is a UN body. CEP and CIPPEC are clearly focused on their own countries -and they are certainly influential there, but not in my country, Peru. And ILD was influential (granted, it has been one of the most influential organisations in the world, led by its director Hernando de Soto) but it has almost no public presence in Peru and cannot really be compared with other Peruvian and Latin American think tanks if one quickly browses through their work and publications. ILD is a fantastic analysis-based consultancy working across the developing world on the basis of some research done in the 1980s. If they make it to the top of the list it is far more interesting to find out why this is the case than to note their place in the ranking: is it because this is what policymakers value, or were the respondents from Africa or Asia, where they do most of their work?

In any case, policy in Peru is influenced by (among the think tanks) CIUP, GRADE (which is mentioned), IEP, and others that are not on the list. This is a perfect example of visibility: it is sometimes my impression that GRADE is quite successful in reaching audiences in DC and London and is therefore well known globally, while IEP and CIUP might be more focused on domestic policy debates and hence less well known beyond the country or region -or certain research communities. This probably reflects their origins, mandates and business models. So even within a country, comparison is difficult. Who is to say, though, whether one is better than the other based on their choice of audiences? [This section has been edited; see comments below.]

Back to Latin America (and, for that matter, Europe). In Latin America there isn’t a regional government, so what is the point of a regional ranking? So what if the top think tank is Brazilian? Is it informing the Chilean government? Is it valuable for Colombia? Maybe in Europe ‘European think tanks’ make more sense, but then is this why domestically focused think tanks are not mentioned? Clearly, international reviewers would not know who the movers and shakers of Peruvian, British, or Spanish policies are. (Again, a point in favour of national awards.)

So maybe the regional focus has little to do with where the think tanks do their influencing and more with quite simply where they are based. But if this is the case then once again we’d be separating think tanks from their context -and this is not right.

And now on to Africa. This list looks a bit messy, to say the least. The first 7 are from South Africa (no surprises there). But number 8 is a regional research network made up of researchers based in think tanks across Africa -I’d like to call it a think tank but I am not sure how it compares with the others. And then it lists a few organisations which can hardly be called organisations at all and are only popular or known because they are among the only ones in their countries. Others are in the process of getting there; but are not there yet. A tiny bit of analysis would have provided sufficient information to disqualify them as worthy of any ranking; and to identify many others who may be more worthy of a mention.

Anyway, what is the point of saying that organisation xyz is among the top 25 in Africa? How does it compare with the Latin American ones, for instance?

What happened to the debate on think tanks in South Asia? I’ve been avidly following a great debate in Indian newspapers on think tanks that would suggest a fantastic opportunity for a study such as this one. And how useful is it to compare them with think tanks in East and Southeast Asia? In fact, how useful is it to compare think tanks in China or Vietnam with those in Japan, Indonesia and South Korea? Our overview study on think tanks and politics in the region showed foundational differences between them that merit more rather than less national focus.

The lack of analysis is telling of the limits of this type of research. A country or region focused study (rather than ranking) would have been much richer and useful.

The thematic rankings are also quite interesting. The fact still remains that one cannot separate theme from politics -and politics are always local.

I would have loved an explanation for Chatham House coming ahead of IDS in the ranking on international development. Chatham House is by far a better think tank than ODI and IDS on foreign policy (and let’s face it, it is a fantastic think tank in general and its contribution to the international development debate is invaluable), but given that international development policy is still largely dominated by DFID, that DFID’s research programme is dominated by IDS and ODI (and not Chatham House), and that IDS alumni roam the corridors of DFID, I cannot understand the ranking. More explanation is needed, please.

Also, why is Fundacao Getulio Vargas included in this table? It is not focused on international development policy; its focus is just on policies. The ‘international development’ prefix is added by ‘northern’ organisations to describe policies for or of developing countries. FGV deals with economic, business and legal research for the development of Brazil. How is this different from the research done by Brookings or IPPR for the development of the US and the UK respectively? (Patronising?)

Also, FGV is included at the foundation level, not at the level of its centres or programmes; however, the Pew Research Center rather than the Pew Charitable Trusts is included. Why? I would suggest that it has to do with the narrow and shallow focus on a global index instead of a desire to understand the richness of the histories of these organisations.

Then it gets confusing: think tanks appear in more than one category but at totally different levels, and others which one would expect to find are gone. Yes, this is all possible, as most think tanks will be good at one thing and not at all of them; but Chatham House, for example, is the top UK think tank in most lists yet sits behind the International Institute for Strategic Studies when it comes to its core area of expertise: foreign policy and security. This makes no sense.

The potentially most useful list (domestic economic policy) ends up being a US focused one. This further illustrates the limitations of a global ranking and its bias towards international development and foreign affairs think tanks that are more easily identifiable in the blogosphere or more popular communication channels than domestically focused ones.

Then the special categories: most innovative policy idea -great category, but what have they been nominated for? What was the idea that got Brookings to the top? Again, another missed opportunity to provide intelligent insights into the rich and complex reality of think tanks. The same goes for the outstanding policy research programme category. Which programme got ODI 15th place? ODI has quite a lot of programmes -and also projects that we call programmes because they are larger than the usual small projects we run. So which one was it? The Africa Power and Politics Programme? The Research and Policy in Development Programme? The Humanitarian Policy Group’s Integrated Programme? The Chronic Poverty Research Centre? It is important to know because some of these are delivered with other organisations, so ODI could not take all the credit.

I got a bit bored and jumped over some tables until I got to the best government-affiliated think tank -WBI? Nice to know that the World Bank is considered a government. If the WB is a ‘government’, would the UN not be one too? (UNU-WIDER and CEPAL are in the other tables.) What about think tanks entirely (or almost entirely) funded by their governments or by international cooperation agencies?

And then, party-affiliated think tanks -an important addition to any work on think tanks. This merits an entirely different post. What does affiliated mean? Does this include conservative think tanks in the United States, like Heritage, or the Conservative Party’s Central Research Department in the UK? And wouldn’t CASS and VASS (the Vietnamese equivalent of CASS) be part of this category? After all, they are affiliated to the Communist Party, and Chinese and Vietnamese line ministries have their own think tanks.

I don’t want this to be a totally anti-Go-to-Think-Tank-of-the-Year rant. As I said before, the ranking has created an opportunity for debate and discussion on think tanks and this is good. But this ought to lead to a proper discussion about think tanks, the roles they play and how they may be able to contribute to their contexts (local and/or global).

The list of questions and criteria on page 56 is the best part of the document and an important contribution to the think tank debate. It provides a guideline of sorts for studying think tanks in greater detail and for promoting a more intelligent debate. Focusing on the list and the ranking, I think, robs us of James McGann’s and his team’s undeniable capacity to do this and leaves us with a bitchy Oscar nominations season for researchers.

on measuring the cost effectiveness of think tanks

I am not a fan of measuring the value of think tanks by looking at website hits and press mentions, but that does not mean that these do not help tell part of the story. This press release by CEPR, based on a study by FAIR (Right Ebbs, Left Gains as Media ‘Experts’) of the cost effectiveness of the most widely cited US think tanks, provides an interesting take on this.

I guess that it makes sense as a comparator between similar organisations -at least between organisations that are playing under more or less the same rules, as is the case in the US think tank scene.
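The arithmetic behind that kind of comparison is simple enough: divide each organisation's media citations by its budget. A quick sketch in Python with invented figures (not the FAIR/CEPR data):

# Invented figures for illustration only; not the FAIR/CEPR numbers.
think_tanks = {
    "Think tank A": {"citations": 3000, "budget_usd": 80_000_000},
    "Think tank B": {"citations": 1200, "budget_usd": 10_000_000},
    "Think tank C": {"citations": 400, "budget_usd": 2_000_000},
}

def citations_per_million(citations, budget_usd):
    """Media citations per million US dollars of annual budget."""
    return citations / (budget_usd / 1_000_000)

for name, d in sorted(think_tanks.items(), key=lambda kv: citations_per_million(**kv[1]), reverse=True):
    print(f"{name}: {citations_per_million(**d):.1f} citations per USD 1 million")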

This does not mean that this type of analysis would work across borders -comparing, say, the US with the UK, or countries within Latin America or Africa.

For instance, this morning Nick Scott, ODI’s online communications manager, sent us this Google Books Ngram comparing ODI with the Institute of Development Studies, the Center for Global Development and Brookings. It is not only an unfair comparison between the UK and the US, but also between the international development focus of the first three and the more general focus of the last.

Independent thinking, but at What Price? – Brookings Institution

Peter Singer hits the nail on the head with an article on the ethics of think tanks and the threats that certain funding sources may create for think tanks’ independence.

On Saturday I blogged about the risks of foreign funding to think tanks in India.

Singer argues that:

Thinktankdom is a field that lacks any universal code of ethics, ombudsmen to hold people accountable, a professional association to regulate, etc. Even more, it is filled with people who are happy to speak about everything under the sun –that is, except their own field and the dirty little part that money plays in it.

And believes that:

thinktankers should not take on private consulting contracts with firms they might research and comment upon in their public work.

John Blundell, former chief of the IEA, agrees with this assessment. And, I must say, so do I.

However, this is not always possible. The reality of many think tanks in developing countries -and certainly in the poorest and most aid-dependent countries- is one where the funders of research and influence are a few bilateral and multilateral donors and, more recently, global foundations. Even in the developed world, international development think tanks (IDS, ODI, DIE, ECDPM, FRIDE, and other smaller not-for-profit and for-profit outfits that portray themselves as sources of independent expertise) are almost entirely dependent on bilateral donor funds -even though their research and influence is also focused on these donors.

In the aid sector, funding for think tanks tends to come in the form of consultancy contracts rather than research grants or core funding. This creates, according to Singer’s assessment, serious conflicts of interest and challenges the very essence of think tanks’ functions.

Singer’s recommendation that think tanks and researchers should fully disclose the source of their funds -certainly more so when a particular study has been commissioned by a client- should be taken seriously in these contexts.

Transparency can only be a good thing.


