“A policy brief is a piece of paper. It doesn’t DO anything on its own”

29 August 2012

Quite some time ago, Jeff Knezovich reported on a study that was due to be published: Should think tanks write policy briefs? In that post he wrote something that we should all keep in mind:

A policy brief is a piece of paper. It doesn’t DO anything, and is therefore unlikely to have impact on its own.

This is something I try to remind people of all the time when discussing policy influence and research uptake. In fact, most communication outputs by themselves probably aren’t very impactful. It is a shame, then, that these outputs tend to be listed as deliverables in contracts with funders and thus tend to become viewed as an ‘end’ rather than a ‘means to an end’.

Well, the paper is out: Can a policy brief be an effective tool for policy influence?

3ie and the Institute of Development Studies (IDS), in collaboration with the Norwegian Agency for Development Cooperation (Norad), explored the effectiveness of a policy brief in influencing readers’ beliefs and prompting them to act.

A multi-armed randomised controlled design was used to answer three research questions: do policy briefs influence readers? Does the presence of an op-ed-style commentary within the brief lead to more or less influence? And does it matter if the commentary is attributed to a well-known name in the field?
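To make the design concrete, here is a minimal sketch, in Python, of how a multi-armed assignment and a simple comparison of outcomes might look. The arm labels, participant IDs and belief-score outcome below are my own illustrative assumptions, not the study’s actual variables or protocol.

    import random

    # Illustrative treatment arms, inferred from the three research questions:
    # the brief on its own, the brief with an unattributed op-ed commentary,
    # and the brief with a commentary attributed to a well-known researcher.
    # These labels are my own shorthand, not the study's.
    ARMS = ["brief_only", "brief_plus_commentary", "brief_plus_named_commentary"]

    def assign_arm(participant_id, seed=2012):
        """Deterministically randomise a participant into one treatment arm."""
        rng = random.Random(f"{seed}:{participant_id}")
        return rng.choice(ARMS)

    def mean_belief_change(results):
        """Given arm -> list of (post - pre) belief-score changes,
        return the mean change per arm (a hypothetical outcome measure)."""
        return {arm: sum(changes) / len(changes)
                for arm, changes in results.items() if changes}

    # Example with made-up readers and outcomes:
    for pid in ["r001", "r002", "r003"]:
        print(pid, assign_arm(pid))
    print(mean_belief_change({
        "brief_only": [0.1, 0.3],
        "brief_plus_commentary": [0.2, 0.4],
        "brief_plus_named_commentary": [0.5, 0.1],
    }))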

In response I posted a rather long email on the ebpdn discussion board [this board no longer exists]. I repost it below with some edits:

I do not want to always be the one to state the obvious (and this probably won’t make me any new friends, I am afraid), but was all this really necessary just to conclude that:

  • Policy briefs have to have clear messages;
  • People who are well known and respected are more likely to be listened to than people nobody knows;
  • Design matters; and
  • Policy briefs should be targeted at the people who matter?

As Jeff’s blog shows, IDS has known all this for quite some time. Even without an RCT, I am sure that its communications and outreach staff knew what they were doing. All of these ‘lessons’ are also key components of ODI’s research communication workshops (which go all the way back to 2009!), and the communications team there has had how-to guides for much longer that say all of this and much more. CIPPEC and other think tanks working on the subject have been saying and doing this too.

RPC advocates might say that before this study we did not really KNOW any of this; we just thought we knew. But did we really have to KNOW this? (In fact, as I will argue below, I do not think the study is robust enough for us to really KNOW if policy briefs work, why and when.) Surely there are other things that are more important, and that we do not know about, for which RCTs can be really useful… but all of this for a simple piece of paper?

I had a few comments to make on the paper:

First, the theory of change for a policy brief presented in the study is a perfect example of a very important confusion: the mistaken assumption that evidence-based policy and policy influence mean the same thing. This, as Emma Broadbent showed in her series of papers on the political economy of research uptake in four policy issues in Sierra Leone, Uganda, Ghana and Zambia, and as Kirsty Newman later blogged, is just not the case. Evidence-based (or evidence-informed) policy and policy influence are two very different things.

Second, there seems to be an assumption that, as in medicine, the reader (the patient) of the policy brief is exposed to no other influences (treatments): that no other forms of communication are affecting his or her ideas. If the intention was to test the effectiveness of a policy brief, surely the treatments should have included other types of communication channels and tools, to see which had the most significant effect, on its own or in combination with others (but which others? There is no standard mix of communication tools used all the time by all organisations in all contexts). Now, that would have been truly interesting. But then how does one create a comparable situation across all the cases? A control? Some people would have had to be isolated from all other sources of information for the duration of the study.

In a way they did include another tool, an opinion piece, as part of the possible mix. But since they did not present the opinion piece on its own, how do they know that the opinion of the respected researchers would not, by itself, have been enough to influence the patient? Again, that would have been an interesting finding.

Third, the choice of topic for the policy brief presents another problem. The thing is, well, nobody really cares. Of course nutrition is an important issue, but not all of the people who responded work on it. And even for those who did, how ideological is this issue for them? For researchers in developing countries the real challenge is not in the more technocratic issues like these. In such cases, the absence of change, even when the evidence is clear, is more likely down to a lack of motivation or capacity to change and implement recommendations than to poor communication of the findings. The challenge is in the ideological issues: those value-heavy policy choices that the developed world deals with on a daily basis but that the international development community tends to dismiss in the developing world. Would it not have been more accurate to test the effects of policy briefs that dealt with free trade agreements, subsidies, the benefits (or costs) of large mining projects, or the privatisation of water? Now, that would have been interesting.

The paper is full of limitations, and most are described at length in it. A key issue that comes up in the analysis but does not seem to merit a mention in the section on limitations is that the policy brief focuses on a specialist subject about which not many people know very much. So really, all that should matter is whether the people who do know about it change their minds or not. It makes little sense to see if someone who knew little or nothing about something changed their mind after reading about it. Of course they would. Policy briefs are meant to target informed (and interested) people. They are never expected to, on their own, convince someone of something they disagree with, or to get someone who does not know anything about an issue excited about it.

So of course one would expect an increase in the number of people who say they believe what the brief says, and in the perceived strength of the evidence, after reading it, if they did not know anything about the topic before. And of course people who already knew about it would be unlikely to dramatically change their beliefs or opinions about what they already feel they know. But again, a policy brief is not there to change people’s minds but to inform them of a course of action. And in fact this is what the study found.

It seems to me that the study ought to have been clearer about what a policy brief is and is not for. And maybe it should have been clearer that a policy brief is never published alone. If a policy brief is put out as a stand-alone output then this should be seen as a failure in communication, so it does not seem appropriate to encourage research centres to do so by suggesting that all may be OK as long as they follow the recommendations of this paper.

The format of the policy brief is also important. The brief tested in the RCT is based on an IDS format, which I have no issues with (except that four pages may have been better than three). But, as I found in a review of a RAPID project to help IDRC programmes in Latin America, Africa and Asia develop policy briefs back in 2008 (if memory serves me right), different organisations have different views of what a policy brief ought to be. This is because different academic communities have different writing styles and expect different things from their researchers. Policymakers, who were themselves once part of those communities at university, also expect different things from researchers. So the three-page policy brief (an odd number, by the way; ideally make them two, four or six pages, even numbers, so they can be easily printed and folded) may work fine in the UK but might not work in Vietnam or in Egypt. And the writing style used in the paper may excite some readers but baffle others.

An important question to ask about the effect of the tool is what its audiences did as a consequence of reading it. On this, nothing seems to be particularly new. The literature, and every communicator in the world, knows that things that take more effort are less likely to be done by the recipient of a message. We all know that. In fact, that is the basis of Outcome Mapping’s Progress Markers: first reactions, then more active participation and engagement, and finally taking on the initiative and more transformative changes. Maybe they did an RCT we are not aware of.

Effort, of course, is linked to the power that people have; another ‘big’ finding of the paper. And power is likely to be linked to their education level and position in their organisations. So it is rather obvious that more powerful individuals will be more likely to act on the recommendations of the policy brief in ways that require resources and other people to do things for them.

Finally, the authors have something to say about the power of influential people and names. They argue that the reputation of the messenger can convince the reader. Well, on page 73 the authors mention a study by RAPID that contains a survey of researchers’ views on the role of evidence in policy and the value of policy briefs, and they use its findings as undisputed fact. Unfortunately, they did not check the survey. If they had, they would have found that it was far from representative of all researchers (I am sorry, but 200 or so researchers from an ODI/SciDev mailing list is not representative of all researchers in the developing world), so the correct way of expressing Jones and Walsh’s opinion (because that is all it can be said to be) is that half of the respondents to their survey (which was terribly biased towards people we knew) were of the view that research communications are of poor quality and that policy briefs could help. So the authority effect certainly worked here. Priceless.

In the end the paper does not tell us if a briefing paper works (without caveats), why it works and when it works. This is what one needs to know. Does it work best at the end of a study? To set the agenda? To advise on implementation? Does it work best when combined with blogs, opinion pieces, videos, a working paper, a good presentation, personal networks? Is it better emailed or delivered personally? Is it better if it is one page? Two? Four? Six? What about the writing style? How many tables should it have? What colours are the most appropriate? Surely the effort to test once and for all whether a policy brief works should answer at least some of these questions. Otherwise, I think I’ll stick with common sense.

I am not trying to discourage the use of RCTs, but I do feel that this is a bit too much. None of this is new and it did not need to be studied in this way. And I think that the medicine metaphor has been taken too far as well. Of course, do not take my word for any of this. Do, please, read the paper and make up your own minds.

My reaction is in part fuelled by the fact that the cost of the study would have paid for a very good communications expert to work for a year at a think tank in a developing country. In fact, he or she could have helped a few think tanks in that country, using the various channels and tools available to them and, most of all, using experience and common sense. The USD20k-plus spent (a full disclosure that is worth mentioning and applauding) could, for example, have paid for a couple of years of running the ebpdn in Latin America, which now does not have an active facilitator. It could have funded several study tours, or a regional conference like the one organised in Latin America last year, where about 40 think tanks came together to learn from each other and present original papers and ideas. I am sure you can think of other, better ways of using this money.

As I have in the past, I recommend that researchers and think tanks just get on with it. Do not wait for the RCTs on blogs, opinion pieces, videos, Twitter, working papers, etc. There are lots of ways to communicate research, and all you need to do is pick the mix that works for you and your organisation (for no other reason than that it makes sense, you have actually thought about it, and it is within your reach), and then make sure you do things right. If you need help, there are people out there thinking about this (like Nick Scott, Jeff Knezovich, Vanesa Weyrauch, Laura Zommer or Lawrence MacDonald, and others) who, I am sure, are willing to lend a hand and provide some thoughtful advice. But remember that a poorly written briefing paper, a press release a day too late, a busy and static website, a boring event, or a poorly scripted video will not work regardless of the brilliance of our ideas; and that will be more a testament to our incapacity than to the ineffectiveness of the tool.

As an afterthought: it is interesting that the study did not consider whether a policy brief based on proper, robust research would have been more influential than one based on questionable research, even if the latter had a clear message, was written by someone well known, was nicely designed and was accurately targeted. Now THAT would have been interesting.

The response to my post has been mixed on the ebpdn discussion list. 3ie, IDS, and others have defended the paper and the innovative effort of the researchers:

Kirsty Newman:

On Enrique’s question about whether it told us anything new, I agree that the findings support my pre-existing suspicion BUT remember that not everyone thinks like we do! There are plenty of examples of research communication strategies where the final objective is the production of a policy brief (or creation of a portal or holding of a seminar or whatever). This research is important evidence which helps build a case that if you seek to influence policy, you need to do more.

Maren Duvendack from ODI:

I have taken quite an interest in the 3ie-IDS-Norad study as I actually think that it is a pretty good idea! Looking at the influence of research on policy is pretty tricky (that’s essentially what the study is trying to do). Mainly qualitative tools are used in this context and many people (mainly economists!) find this very frustrating and crave some sort of quantification of the policy influence of research and the study does exactly this! Quantifying policy influence of research is not new, some people (mainly economists again!) tried calculating internal rates of return for example but that’s pretty flawed too.

Anyway, this is just another RCT and given their limitations one should always take their results with a pinch of salt!

But there have also been some voices supporting my critique:

Nick von Behr from behroutcomes.co.uk:

As a bit of an outsider to Development I agree with everything that Enrique says in terms of policy influence. Better to spend the money on actually doing it through trial and error rather than taking a hugely scientific approach, laudable though the efforts have been. And as he says what matters is the balance of research evidence that actually supports/contradicts a new policy direction in whichever field (mine is education).

And from an actual communications practitioner, Maryam Mohsin, also at ODI:

As a long time comms professional I find it a bit bizarre that so much time, resource and effort is being invested into applying RCTs to test how we can influence policy. Are we really trying to find a one size fits all way to communicate? Anyone with a bit of common sense will tell you how far that will get you in the complicated world of policy influence. And any communications professional who needed this RCT to tell them the key messages it contains should be fired immediately (I say in half jest – but I do wonder who this study is aimed at).

I simply do not believe policy influence is an exact science, it’s based on capacity, dialogue, strategy, pro-activeness and reflection. I think we need to stop trying to find ways of putting square pegs into round holes and shift away from placing emphasis on the tools, and move towards placing more emphasis on the strategic use of these tools.

We play a small part in the whole process of policy making, through to influence. We need to humbly recognise this and spend more time looking at what we do have the capacity to contribute towards, and focus on being strategic in our attempt to give ourselves a better chance of success.

This point made by Maryam echoes a follow-up email I sent to the community in which I suggested that:

I see no future in this line of questioning. Sure, it is fun, and I can see why a researcher would like to do it; but it is, in my view, useless and dangerous. I would encourage us instead to support organisations to reflect on what they do and how they do it. Invest in people. See Laura Zommer’s fantastic arguments for inspiration. But don’t do what she says; follow her thought process and arrive at your own conclusions. Involve your communicators in strategic discussions. Involve your researchers in thinking about how they communicate and what influences change in their own policy communities. Come together in exercises like these, supported by GDNet, or the richer cases that this network supported in Latin America and Africa. THINK about it; don’t just wait for proof. There will be no proof. The IDS study’s caveats are far too large and significant to take any of its conclusions seriously. The best way forward is more, and richer, critical reflection (not just descriptive cases of successes).