Should think tanks write policy briefs? What an RCT can tell us

30 March 2012

There is no doubt that policy briefs are an important tool for research communications, especially for policy-oriented think tanks. A recent study from CIPPEC found that 80% of think tanks surveyed across Latin America, Africa and Asia produced some form of policy brief.

‘Yeah, but do policy briefs actually DO anything? Do they actually have impact?’

As someone with ‘policy influence’ in my job title, and as someone who trains researchers and other research uptake professionals on the development of policy briefs, it’s a question I get a lot.

In fact, I remember in January 2011 sitting in Delhi at the headquarters of the International Initiative for Impact Evaluation (3ie), and being posed that exact question by their Senior Advocacy and Policy Officer, Christelle Chapoy. The answer is, of course, both incredibly simple and incredibly complicated.

The simple answer

A policy brief is a piece of paper. It doesn’t DO anything, and is therefore unlikely to have impact on its own.

This is something I try to remind people of all the time when discussing policy influence and research uptake. In fact, most communication outputs by themselves probably aren’t very impactful. It is a shame, then, that these outputs tend to be listed as deliverables in contracts with funders and thus tend to become viewed as an ‘end’ rather than a ‘means to an end’.

Here’s where it gets a bit more difficult. Policy influence requires a concerted effort to engage target audiences at different times with different information and messages – which is why putting together a clear strategy is such an important first step. The type of engagement needed will depend on the context in which one is working and on a number of factors, including the stage of the policy cycle, how controversial the topic is, the capacity of policy makers to understand evidence and research, the strength of the evidence, and so on (see the RAPID framework for a good list of things to consider).

Getting something on the policy agenda, for instance, requires different information (and a different strategy) than improving the implementation of an established policy. This, in turn, requires a variety of targeted communication products.

Probably even more important is some sort of human interaction – never underestimate the power of a cheeky pint (or more culturally appropriate informal meeting, whether it be tea or coffee, or even a breakfast) with your key audience. In that context, passing them a policy brief might be an easy way for them to recall parts of your discussion, or act as a way to establish credibility in your discussions.

The (even more) complicated answer

OK, so we know that it probably takes more than a piece of paper to influence policy. That said, can some policy briefs be more effective than others? Absolutely.

At a seminar earlier this week at IDS entitled ‘Research, Knowledge, Action – What can a policy brief really achieve?’, researchers from IDS and 3ie (Penelope Beynon, IDS; Marie Gaarder, formerly of 3ie; and Edoardo Masset, IDS) presented some initial findings from a research study looking into the utility of policy briefs as a tool for research uptake. And I was glad to see an attempt to answer the question posed over a year ago.

Using a randomised controlled trial, or RCT (which presented its own series of challenges), they tested what sort of actions a relevant audience might take. They were particularly interested in digging deeper into a finding from Jones and Walsh (2008) that nearly 80% of researchers and policy makers in the science, technology and innovation field valued the opinion of the researcher, not just a presentation of the basic facts. In other words, they were looking at the role of personal authority and credibility.

After inviting participation from some 70,000 individuals through IDS and 3ie contact lists, 807 participants self-selected into the study (which says a lot right there!). Participants were generally well educated and interested in the field of agriculture and development, and they self-identified how much influence they had over policy, ranging from little to substantial.

They divided the sample into four groups. The control group was given an IDS policy brief only vaguely related to the topic of the study. The first experimental group received a two-page brief summarising findings from a systematic review on agriculture and nutrition, looking particularly at bio-fortification and home-based farming. The second group received the same brief with a two-page opinion piece by Lawrence Haddad, Director of IDS, attached. The third group received the brief and the same opinion piece, this time attributed to an unnamed IDS research fellow.
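
For what it’s worth, the mechanics of such an assignment are simple. Here is a minimal sketch in Python – purely illustrative, since the post doesn’t describe the study’s actual randomisation procedure, and the arm labels and equal-probability split are my assumptions:

```python
import random

# Hypothetical sketch of assigning participants to the four arms described
# above. The study's actual randomisation procedure is not described in the
# post; the arm labels and the equal split are illustrative assumptions.
ARMS = [
    "control: unrelated brief",
    "brief only",
    "brief + named opinion (director)",
    "brief + unnamed opinion (research fellow)",
]

def assign_arms(participant_ids, seed=2011):
    """Shuffle participants, then deal them round-robin into near-equal arms."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {arm: ids[i::len(ARMS)] for i, arm in enumerate(ARMS)}

# Example with the study's 807 self-selected participants
assignment = assign_arms(range(1, 808))
for arm, members in assignment.items():
    print(f"{arm}: {len(members)} participants")
```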

I’ll leave it to the researchers to publish a more thorough analysis of their findings, but a few points from their study I found particularly interesting:

  • Confirmation bias applies, even to policy briefs: I’ve written before about the role of confirmation bias in policy making. Confirmation bias plays out in many ways; one is that evidence challenging an existing belief tends to reinforce rather than change it. In the study, participants who held no particular opinion on bio-fortification changed their minds in line with the evidence presented in the brief. But those who had already formed an opinion didn’t budge – even, or perhaps especially, when the evidence contradicted their beliefs.
  • The ‘Lawrence effect’ is insignificant: Sorry Lawrence! We may love you here at IDS, but adding your opinion to the policy brief had only a small, statistically insignificant effect on whether people changed their opinion on the topic or passed the brief on. I should add that the opinion of an unnamed research fellow didn’t have an effect either, so Lawrence, you may be off the hook! Don’t stop blogging just yet. And on that note…
  • ‘I’d rather change policy than write a blog!’: When it came to follow-up actions after reading the brief, simple ones like passing it along or telling someone about the findings were most common. The action respondents said they were least likely to take was writing a blog about it… even less likely than actually changing a policy. There are any number of explanations for this, including the study sample and their self-perceived responsibilities, but it certainly confirms that convincing researchers and policy makers of the value of blogging is an uphill battle.
  • Getting the wrong message: Over 50% of study participants shared the findings of the policy brief. Unfortunately, when asked about the key message of the brief in a qualitative follow-up interview, many couldn’t accurately recall it. Some interviewees even recalled key messages contradictory to those intended. Whoops! This may have been because of a poorly crafted brief, or, more likely, because it summarised a systematic review – a research approach that tends to end in more questions than answers, and therefore rather unclear messages. Maybe they could have applied Enrique’s taxi driver test to refine their messages. Or, as Arianna Huffington recently pointed out, perhaps we shouldn’t fetishise the notion of ‘going viral’. It also reminds me of previous research by HP Labs showing how quickly the quality and accuracy of messages decay as they spread through communities – though I hadn’t expected the decay to begin with the very first share.
  • Women less prone to share information than men!: It’s the attention-grabbing headline that Marie, rather glibly, suggested the media might take from the study. But the gender divide was consistent throughout the study, even after controlling for a variety of factors. So the question remains: what is it about policy briefs that makes them seemingly less relevant to women?

It’s clear that randomised controlled trials can tell us something about the utility and use of policy briefs. But I would also argue that RCTs are not the only arbiter of good knowledge. So in the next post I will reflect on my own experience of developing policy briefs for a range of audiences, to see what that experience can tell us about how to make them more effective.