{"id":1934,"date":"2012-01-06T15:56:19","date_gmt":"2012-01-06T20:56:19","guid":{"rendered":"https:\/\/onthinktanks.org\/?post_type=resource&p=1934"},"modified":"2017-08-07T09:53:36","modified_gmt":"2017-08-07T14:53:36","slug":"a-pragmatic-guide-to-monitoring-and-evaluating-research-communications-using-digital-tools","status":"publish","type":"resource","link":"https:\/\/onthinktanks.org\/resource\/a-pragmatic-guide-to-monitoring-and-evaluating-research-communications-using-digital-tools\/","title":{"rendered":"A pragmatic guide to monitoring and evaluating research communications using digital tools"},"content":{"rendered":"
How do you define success in research communications efforts? Clearly, if policy influence is the name of the game, then evidence of communications having played a role in policy change is an appropriate barometer. The trouble is, there are a number of conceptual, technical and practical challenges to finding that evidence and to using it to measure the success of an individual or organisation. There can be occasions when communications staff can show they played a role: perhaps someone has organised a series of meetings with policy-makers as part of a communications strategy, or the media have picked up a new report, pushing an issue to the top of the policy agenda. However, given the complexity of policy cycles, examples of one particular action making a difference are often disappointingly rare, and it is even harder to attribute any success to the quality of the research, the management of it, or the delivery of communications around it.<\/p>\n