{"id":1934,"date":"2012-01-06T15:56:19","date_gmt":"2012-01-06T20:56:19","guid":{"rendered":"https:\/\/onthinktanks.org\/?post_type=resource&p=1934"},"modified":"2017-08-07T09:53:36","modified_gmt":"2017-08-07T14:53:36","slug":"a-pragmatic-guide-to-monitoring-and-evaluating-research-communications-using-digital-tools","status":"publish","type":"resource","link":"https:\/\/onthinktanks.org\/resource\/a-pragmatic-guide-to-monitoring-and-evaluating-research-communications-using-digital-tools\/","title":{"rendered":"A pragmatic guide to monitoring and evaluating research communications using digital tools"},"content":{"rendered":"

How do you define success in research communications efforts? Clearly, if policy influence is the name of the game, then evidence of communications having played a role in policy change is an appropriate barometer. The trouble is, there are a number of conceptual, technical and practical challenges to finding that evidence and using it to measure the success of an individual or organisation. There are occasions when communications staff can show they played a role: perhaps someone organised a series of meetings with policy-makers as part of a communications strategy, or the media picked up a new report, pushing an issue to the top of the policy agenda. However, given the complexity of policy cycles, examples of one particular action making a difference are often disappointingly rare, and it is even harder to attribute each to the quality of the research, the management of it, or the delivery of communications around it.

\"ODI
ODI Dashboard<\/figcaption><\/figure>\n

I have recently created a monitoring and evaluation (M&E) dashboard to track how ODI outputs disseminated through our main communications channels fare. It brings together qualitative and quantitative data collected digitally, including a range of data available from new digital tools used on our website and online. In this blog, I outline some of the lessons I've learnt in the process of creating the dashboard and investigating the data, describe a framework I've developed for assessing success, and list some of the key digital tools I've encountered that are useful for M&E of research communications. If you're only interested in the tools and applications, jump straight to the list at the end.

The M&E dashboard\u00a0builds on previous ODI work on M&E of policy influence<\/a>\u00a0to provide an overview of how research communications work for the organisation, and its programmes and project teams. To try to give full picture, I\u2019ve included data that could give insights into the quality and management of research, as this can sometimes be the reason for the success or failure of communications \u2013 if the research being communicated is of low quality it may be harder to achieve success, or any success may be counter-productive as it can damage reputations and brand. My aim is to create a set of benchmarks for the organisation, to be able to assess success in reaching and influencing audiences, identify what factors led to that success, and act upon the findings. In particular, I want to use this information to inform future communications actions and strategies.<\/p>\n

## Be pragmatic, part one: only measure what you can measure

Digital tools do not offer a panacea for the measurement of policy influence: it is unlikely that tools will ever be available that can report on exactly who is reading or engaging with particular pieces of content, what their jobs are, their specific role in policy, and their intellectual reaction to any content they read. If anything, the current direction of change is moving further away from this: Google recently updated the way it reports on searches to remove very useful data on keywords, due to concerns that this data could identify the searching habits of individuals and infringe on their privacy.

In lieu of specific insights into how communications play a role in policy cycles, what can you do to measure the reach and efficacy of research communications? A few years ago, Ingie Hovland of ODI proposed five levels at which to assess policy influence efforts. Many of these levels apply directly to research communications and, taken together, they offer a framework for assessing the influence, or otherwise, of research on policy debates and processes: a useful barometer of success where evidence of direct impact and policy change is hard to come by or define. The levels are:

1. **Strategy and direction**: the basic plan followed in order to reach intended goals. Was the plan for a piece of communications work the right one?
2. **Management**: the systems and processes in place to ensure that the strategy can succeed. Did the communications work go out on time and to the right people?
3. **Outputs**: the tangible goods and services produced. Was the work appropriate and of high quality?
4. **Uptake**: direct responses to the work. Was the work shared or passed on to others?
5. **Outcomes and impacts**: use of communications to make a change to behaviour, knowledge, policy or practice. Did communications work contribute to this change, and how?

The assessment levels are generally cumulative, too: success at level 5 implies some element of success at levels 1 to 4.
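As a minimal sketch of how the five levels and their cumulative relationship might be modelled in code (the enum and function names are my own illustration, not part of Hovland's framework):

```python
from enum import IntEnum

class AssessmentLevel(IntEnum):
    """Hovland's five levels for assessing policy influence efforts."""
    STRATEGY_AND_DIRECTION = 1
    MANAGEMENT = 2
    OUTPUTS = 3
    UPTAKE = 4
    OUTCOMES_AND_IMPACTS = 5

def implied_levels(achieved: AssessmentLevel) -> list[AssessmentLevel]:
    """The levels are generally cumulative: success at level 5
    implies some element of success at levels 1 to 4."""
    return [lvl for lvl in AssessmentLevel if lvl <= achieved]

# Example: success at 'Uptake' implies some success at levels 1-4.
print(implied_levels(AssessmentLevel.UPTAKE))
```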

By bringing together different statistics from a number of sources, our M&E dashboard aims to help in the development and evaluation of the first of these levels, strategy and direction, by providing benchmarks for planning and assessing success at the remaining four levels. Examples of statistics and information that can be gleaned from digital systems and used for different types of communications work include:
| Output type | Management | Outputs | Uptake | Outcomes & impacts |
| --- | --- | --- | --- | --- |
| **Website** | A good split of sources for web entrances (search engine vs. email and other marketing vs. other sites); number of website visitors | Website survey, such as 4Q; search engine positioning | Clicks of 'Share' button on home page; social network mentions of the site as a whole; subscribers to news feeds | Evidence sent to the M&E log from emails or personal contacts |
| **Publication** | Number of downloads; split of web entrances (search engine / email / other sites) | Feedback survey; search engine positioning (keyword analysis); clicks of 'print' button; number of publications produced | Citation tracking; social network mentions; clicks of 'share' button | |
| **Blog or article** | Number of webpage views; split of web entrances (search engine / email / other sites) | Comments on blog; search engine positioning; clicks of 'print' button | Comments on blog; social network mentions; clicks of 'share' button; placement on media site or media mention | |
| **Event** | Number and type of contacts receiving invitations; number of dropouts (people who register but don't attend); web visits to event page; split of web entrances (search engine / email / other sites) | Number of registrations and dropouts; views of catch-up video; feedback survey; online chat room comments | Clicks of 'share' button; social network mentions; feedback survey | |
| **Media release** | Number of contacts on media release list; subscribers to media news feed | Number of media mentions generated; logs of follow-up calls from media contacted | New sign-ups to media release contact list; logs of follow-up calls from new media outlets; subscribers to news feeds | |
| **Newsletter** | Number and type of contacts receiving newsletter | Number of click-throughs from items; number of people unsubscribing | New subscriptions to newsletter; forwarding of newsletter; social media mentions | |
| **Organisation** | Collation of the above, plus number of social network followers and overall number of contacts | Collation of the above, plus overall number and type of outputs | Collation of the above, plus indicators of social network influence (e.g. Klout) | Collation of the above |

The rows in the table above are split into output types for a reason: this is the level at which action based on analysis can most easily be taken. At this level, a communications planner can better choose the right platforms for achieving engagement on certain topics, decide on a case-by-case basis where to post blogs, how and where to advertise new papers, and what to spend money on. Collating all the actions taken for each output produced by a programme or organisation, and their results, should provide evidence of organisational progress.
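For teams that keep this mapping in code rather than a spreadsheet, a minimal sketch might look like the following. The dictionary simply transcribes part of the table above (abridged); the structure and all names are my own illustration, not a description of ODI's dashboard.

```python
# Metrics to benchmark per output type and assessment level,
# transcribed (abridged) from the table above.
METRICS: dict[str, dict[str, list[str]]] = {
    "publication": {
        "management": ["downloads", "entrance_split"],
        "outputs": ["feedback_survey", "keyword_positioning", "print_clicks"],
        "uptake": ["citations", "social_mentions", "share_clicks"],
    },
    "blog": {
        "management": ["pageviews", "entrance_split"],
        "outputs": ["comments", "search_positioning", "print_clicks"],
        "uptake": ["comments", "social_mentions", "share_clicks", "media_mentions"],
    },
}

def organisation_metrics(level: str) -> set[str]:
    """Collate metrics across all output types for one assessment level,
    mirroring the 'Organisation' row of the table."""
    return {m for per_level in METRICS.values() for m in per_level.get(level, [])}

print(sorted(organisation_metrics("uptake")))
```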

## Be pragmatic, part two: don't measure everything you can measure

Even if you stick to assessing and taking action on only those things that are measurable, it is important to avoid over-complication by being quite picky about what you do and don't include in any dashboard. The number and types of statistics that can be tracked is huge, after all: Google Analytics, for example, tracks hundreds of different pieces of information for every page viewed. If you track too many things, it will be hard to see the wood for the trees and pick out the messages in your statistics. Equally, if you choose the wrong statistic, the message won't even be there to see.
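To make the 'be picky' step concrete, here is a minimal, self-contained sketch. In practice the raw fields would come from a Google Analytics export or API query; the sample values and the two chosen metric names here are placeholders of my own, not the metrics this dashboard actually uses.

```python
# A page's raw analytics export can carry dozens or hundreds of fields;
# a dashboard should surface only the handful chosen to tell the story.
raw_page_stats = {
    "pageviews": 1528, "unique_pageviews": 1210, "avg_time_on_page": 84.2,
    "entrances": 980, "bounce_rate": 0.47, "exit_rate": 0.52,
    # ... hundreds more fields in a full Google Analytics export
}

KEY_METRICS = ("pageviews", "entrances")  # placeholder pair of key metrics

# Keep only the chosen metrics; everything else stays out of the dashboard.
dashboard_row = {k: raw_page_stats[k] for k in KEY_METRICS}
print(dashboard_row)  # {'pageviews': 1528, 'entrances': 980}
```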

When deciding what to include in the dashboard, I had to think carefully about how these statistics were going to be used. As the aim of the dashboard is to provide insights into trends in the viewership of, usage of and engagement with communications products, I chose specific statistics that would tell this story. So for Google Analytics, I report on only two key metrics to give me an overview: