#dhakacomms workshop: Day 2

27 May 2014
Series: Communication as an orchestra

[Editor’s note: you can read all the posts in the series here: #dhakacomms workshop: Day 1; #dhakacomms workshop: Day 2; #dhakacomms workshop: Day 3; #dhakacomms workshop: Day 4; plus a learners’ perspective.]

At the end of the first day of the communications workshop at the Centre for Policy Dialogue (CPD) we identified a ‘universe’ of channels and tools.

Think tank communication portfolios

The long list shows that the think tanks involved in the project already do quite a lot of communications. In fact, IGS does a bit more: it has just set up its Twitter account: @BIGD_BRACU.

Of course, think tanks cannot possibly do everything. They must prioritise. Their portfolio of tools should work well together, no tool ought to stand on its own, and, all together, they should allow the think tank to achieve its objectives and reach all its audiences.

We began the morning with an exercise. Each team was asked to develop communication portfolios for each think tank. They drew from the ‘wall’ and talked to their fellow participants about the tools they were using. They took the following into consideration:

  • Avoid overcrowding: Do I need an “Opinion Piece” if I already write “blogs”? Do I need 2 or 3 Newsletters?
  • Economies of scale: Does it matter where I hold the event or who the panellists are? How different is it really?
  • Focus on what you can do: MOOCs sound like a great idea but my internet connection may not be fast enough
  • Focus on what you should do: I can do a news report but should I?
  • Think tank or researcher: Personal FB pages and Twitter accounts are 1) personal and 2) an opportunity
  • Avoid isolation: Never publish a working paper without a blog; never organise an event that is not filmed/recorded; etc
  • Are ideas at the centre of everything we do? Or is the ‘format’ more important?

The final portfolios reflected both what the think tanks were doing already as well as tools they wanted to try out.

‘Rules’

Based on these portfolios, the teams then attempted to create some ‘rules’.

Rules are important for a number of reasons. In the orchestra metaphor, the rules represent the main score and the individual parts of the musicians. This is what the orchestra is set to play. This is what it is ready to play.

It saves time if everyone knows what should be done. It also allows people to practise. Each ‘set of rules’ becomes like a game that the think tank can play over and over again and learn in the process. When PMRC in Zambia began implementing their set of rules, the communications outputs they produced were not all that good. But soon, after a few rounds, they got better and are now as good as any.

Rules make it possible to test new tools. From set to set the think tank can make small changes (introduce or remove tools) to see what the effects are. It can, for instance, organise an event using EventBrite and see if it makes a difference; or try to contact journalists via Twitter instead of email and see if more show up.

Examples of these sets of tools are:

  • Working paper(s) and policy brief(s)
  • Event and Twitter and blog
  • Research Report and blog and interview and event
  • Event and video and Twitter and Event Report (why not on Storify?) and interviews and publications

These combinations must be flexible but not entirely random, and they should avoid isolation. An isolated tool (a paper published on its own) is as good as nothing.

These rules, of course, cannot be developed in isolation from the reality that the think tank deals with. They ought to consider:

  • The level at which they should apply: organisation, programme, project, researcher
  • Political space that the think tank is targeting: international, national, subnational, local
  • Technical level of the audiences: politicians, technocrats, informed public, general public
  • Importance for the think tank: priority issue/theme, long-term agenda interest, one-off study
  • The think tank’s objectives: agenda, decisions, implementation, evaluation.

The participants worked in teams to develop rules for some of their key communication tools: research report launches, books, and dialogues.

Here is an example from CPD:

Monitoring

[Note: for a longer discussion on the M&E of think tanks, as well as practical tools, please visit the On Think Tanks M&E Topic Page.]

There is a great deal of emphasis on and interest in monitoring and evaluating the impact of research and of think tanks specifically. While donors and some researchers and consultants insist that it is possible to measure (or to approximate some measure of) this influence, the fundamental challenges involved in attempting to establish a clear relationship between think tanks’ activities and policy outcomes and impacts remain.

These include:

  • The problem of attribution, as well as attempting to measure the relative contribution a think tank may have made to a policy change. Although it may be possible to describe the contribution made to a change, measuring it (seeking a percentage, as returns on investment do, for example) is as problematic as claims of attribution.
  • Methodological weakness: those concerned with this view consider that, to be robust and accurate, a study of the impact of think tanks would have to review all possible avenues of direct and indirect influence, for instance, through the development of capacity in current and future policymakers, changes in the research, media and public agendas in which think tanks may have played a role, movements of think tank experts into policy, the effects of formal and informal meetings over an extensive period of time, etc. Furthermore, it would also need to consider all other sources of influence and their, also extensive, possible avenues of influence. The cost of such an effort would be so high that it would be prohibitive for think tanks themselves.
  • Even if we were able to find a clear relationship between think tank outputs and outcomes in one instance, we could still not say anything about their influence overall. A particular strategy might have worked for a think tank one time, but this does not imply it will work every time. The effect of factors outside the control of think tanks is likely to be far more significant than anything the think tank could have done.
  • A focus on outcomes and impact, at the expense of what the think tank actually does (inputs and outputs), reduces the chances for learning.

A focus on outcomes and impact also places too much emphasis on communications. There is, however, little evidence that this is what leads to substantive influence. For example, Andrew Rich, in a study of think tanks in the US, found that the most important factor explaining a think tank’s influence on policy decisions was its role in identifying the policy problem. Communication, given the right context and demand from policymakers and other more powerful political players, could help position think tanks ‘in the right place’, but it was their business model and research agenda that eventually gave them an edge.

Finally, this attention emphasises accountability over learning. And it is particularly interested in accountability to foreign funders rather than to domestic stakeholders.

Monitoring inputs and outputs, on the other hand, presents a number of opportunities:

  • First of all, it offers individuals and organisations useful information about what they are doing, how well they are doing it, and how they may improve. This is, after all, what is within their control and responsibility.
  • It positions communications as an interconnected and interdependent part of a think tank: sharing responsibility with research and management, without which it would be impossible to say anything substantial about a think tank’s performance. In other words, it asks questions related to how a communication activity was planned and executed and the quality or relevance of its message, as well as its possible effect.
  • This emphasis on inputs and outputs combines accountability (for doing a good job) with learning.

Impact/outcomes are good to keep in mind when planning (in order to define the most appropriate outputs and inputs) but monitoring and evaluation should focus on the correct delivery of those plans and not on the achievement of those outcomes.

As an exercise the participants were asked to, for each channel:

  • Describe ‘what they had to know about the tools that would tell them if they were doing a good job’. This exercise asked them, in clear English, avoiding jargon and indicators, to list the kinds of questions (all of them) that they would ask if they wanted to know whether a communications tool (e.g. a working paper, an event, a video, a press meeting, etc.) was done properly.
  • Then, based on these questions, they identified 2 or 3 indicators.

For example, SDPI considered:

Channel: Digital

Tool: SDTV

What would we need to know? We would like to know who is watching SDTV shows, where they are watching from and what type of feedback we get from them; whom and what we are covering on SDTV and how important and relevant they are. It would be useful to know how constructive the communication between the producer and researchers is, whether we are following media ethics, whether the reporting is unbiased and reflects the research in its true form, etc. Also relevant is knowing who is funding it, as this dictates the duration of the clip and the webspace, as well as how much each clip costs to produce.

It would also be important to know if SDTV is considered ‘’ by its audience and sector experts: Are people coming back? Is it accessible?

Indicators (examples):

  • Number of viewers SDTV has
  • Number of viewers from the research community and donor community
  • Number/percentage of viewers who are regular viewers
  • Number of requests from state and private organisations to cover their events
  • Number of requests from mainstream media to show SDTV

Channel: Events

Tool: Monday Seminar

What would we need to know? We would like to know if the topics covered at the Monday Seminars are relevant to the current affairs of Pakistan. Do the speakers and experts present on the panel represent all sides of the story; are we being balanced and open? Is the media covering the event? Is it being followed on social media and what are people saying about the event there? Has there been any effect or follow-up after the event? Who was responsible for carrying it out? How much did the event cost to organise?

Indicators (examples):

  • Number of participants
  • Characteristics of the participants (number/percentage of students, policymakers, opinion leaders, etc.)
  • Relevance of the topic to the audience present
  • Number of mentions on the mainstream media

After trying this process out for most of the tools in each channel it became clear that, by and large, the questions we ask (and the necessary indicators) are more or less the same for all the tools within each channel. It would be impossibly expensive to gather information on every possible indicator. Instead, think tanks must choose those that allow them to tell a compelling story (build a compelling case) about their work.

Crucial for this is that think tanks are aware of benchmarks in their country, region and sector. They must reach out to other think tanks to find out how many views they get on their videos, the number of downloads, participants at their events, etc. These numbers and statements of relevance or quality need to be placed within a context. And when there aren’t sufficient think tanks to compare with, they will have to be inventive and look for other comparators: NGOs, consultancies, etc.

Follow-up

Early on day 3 of the workshop we got together to talk about lessons learned or interesting ideas from the day before. The participants highlighted the following, among others:

  • Managing the media. It is not enough to send an invite. Journalists need to be managed: invited with sufficient time, called, and reminded by email and text before the event. Follow-up is also important.
  • Publications. Using barcodes, ISSN numbers and QR codes can help professionals and readers find and track papers and books.
  • Livestreaming is not enough. Alongside livestreaming, think tanks need to offer moderation, comments, encouragement for others to participate online, etc. It takes time to build an online audience, but livestreaming can help capture people’s attention (so that they will go back to the video after the event) or reach foreign audiences, or audiences far from the events.

Next post here.