February 9, 2015

Case study

How on earth do you measure the impact of your events?

Measuring the impact and quality of events is a murky field. Think tank events often have multiple objectives, take many guises (from the ambitious conference to the closed door round table) and are targeted at very different audiences. Sometimes the value is instant and obvious, for example strong media coverage, but at other times it’s not so clear.

A recent On Think Tanks piece spelt out the need for events to be strategic and the importance of careful planning; above all, when an event hasn’t been well prepared or follows a stale format, it really shows. I couldn’t agree more. Over the years, I’ve seen poor turnouts and a few disasters, but the real danger often lies in events that just tick along with no obvious purpose, to the detriment of time and resources. Sure, events are sometimes a gamble or opportunistic, but with a clear strategy they can still have real engagement value.

But how can you know if an event has impact and that it was worth all that effort?

This was just the question we came up against when planning how to measure the impact of a series of events for the DFID-ESRC Growth Research Programme (DEGRP). Our events have multiple purposes, vary in size, and take place in countries like the UK, Ghana, the USA and Bangladesh. We wanted to know how we could measure the quality of our events in a way that recognised their individual objectives, but ensured that they aligned with our overall programme strategy. In the past, we have often turned to the audience feedback form, but as we found, in some contexts it’s just not appropriate. You simply can’t ask ministers or senior government officials to fill in a feedback form. Faced with this dilemma, Louise Shaxson and I decided to develop our own event assessment tool to build learning into our event planning. It’s still a trial, but here’s what we’ve come up with:

Did someone say objectives?

It’s the simple things in communications that hold the key to better strategy, and events are no different – if you look to your objective(s) both in planning and in M&E, you can’t go far wrong.

In my fundraising days, event objectives were fairly straightforward and easy to track – to raise money or gain new supporters – so M&E was all about tracking progress and reaching your target. But for think tanks, there is a variety of issues that contribute to the success of an event, such as:

  • The degree to which an event stimulates debate and new thinking
  • Whether it increases knowledge
  • Whether and how it influences the wider debate
  • How it contributes to convening and networking
  • How it contributes to raising awareness and profile
  • The quality of the debate
  • The inclusiveness of participation
  • The timing of the event (in the wider context of an issue)
  • The reach of any message
  • Whether it generates practical advice or feedback

There may be more, and an organisation or project needs to develop a tailored set to fit its overall purpose. You could monitor all of the above if you wanted to, but we think you would run out of money and/or go mad. It is better to choose a selection that really reflects what you’re trying to achieve. For DEGRP events, we chose four categories aligned to our engagement strategy:

  1. Stimulating debate or new thinking
  2. Influencing (the wider debate or stakeholders)
  3. Convening and networking
  4. Raising awareness and profile

Not all our events will have all four objectives; some might have only one or two. The key is deciding, for each event, which categories are critical to measuring success, and using that decision to inform the event planning.

What sort of learning and evaluation questions should you be asking?

We put together the following table and identified some key questions to ask for each category, as well as some ways we might collate the responses. The benefit of using a tool like this is that you can have a different data collection system for each event, but still harmonise results.

Category: Stimulating debate or new thinking

Questions:

  • Was the event thought-provoking?
  • Were new issues raised?
  • Were there any breakthrough discussions?
  • Was there any follow-up debate after the event?

Ways to collate responses:

  • Feedback forms
  • Follow-up emails or post-event discussions (e.g. email from a funder)
  • Follow-up events or blogs
  • Comments made by participants during the event

Category: Influencing

Questions:

  • Were the speaker(s) and/or participants of a high level/quality?
  • Was action taken as a result of the event?

Ways to collate responses:

  • Follow-up interviews
  • Guest list
  • After action reviews
  • Note: direct questions to participants on whether or not the event was influential may not work, particularly if the issue is a political one or politicians participate.

Category: Convening and networking

Questions:

  • Did the event build new networks?
  • Did it bring new sets of stakeholders into the room?

Ways to collate responses:

  • Follow-up feedback forms
  • Informal conversations

Category: Raising awareness and profile

Questions:

  • Did the event raise awareness or build the profile of the project/programme/organisation or a specific theme?
  • Was there any media attention?

Ways to collate responses:

  • Number of attendees (audience size)
  • Dropout rate
  • Website stats
  • Social media stats (comments, retweets, mentions, new followers)
  • Media mentions

We do use feedback forms where they are appropriate, but generally the learning is self-reflective; we ask ourselves the questions, collate responses and if working with a co-host or partner, ensure we build in their views.

Would a score help?

In DEGRP’s case we decided to take it one step further and use a scoring system. While you could very easily just focus on building your learning using the above, having scores may be helpful, particularly when you have to report to funders.

You need to decide when and where you will discuss the scores: who will use the information and for what purpose. If engagement is a central part of your project or programme you may want to discuss the scores more regularly than if it is a smaller part of your work.

For DEGRP we chose a numerical scale and then described what each score means: each category is scored from 1 to 5 (or 0 if the category is not applicable). For Influencing, 1 might be “the speaker was not of a high level/quality”, while 5 is “highly influential attendees participated fully in the event and subsequently took actions that could be directly attributed to the event”.
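For readers who keep their event records in a spreadsheet or script, the scoring logic above could be sketched roughly as follows. This is only an illustration, not DEGRP’s actual tool: the category names are the four from this post, but the function name, event name and example scores are invented.

```python
# Sketch of recording one event's assessment: scores run 1-5,
# with 0 meaning the category was not applicable to this event,
# and every non-zero score must carry a written rationale.

CATEGORIES = [
    "Stimulating debate or new thinking",
    "Influencing",
    "Convening and networking",
    "Raising awareness and profile",
]

def score_event(name, scores, rationales):
    """Validate and store one event's assessment.

    scores: dict mapping category -> 0-5 (0 = not applicable)
    rationales: dict mapping category -> short justification
    """
    for cat in CATEGORIES:
        s = scores.get(cat, 0)  # unscored categories default to "not applicable"
        if not 0 <= s <= 5:
            raise ValueError(f"Score for {cat!r} must be 0-5, got {s}")
        if s > 0 and not rationales.get(cat):
            raise ValueError(f"A non-zero score for {cat!r} needs a rationale")
    return {"event": name, "scores": scores, "rationales": rationales}

# Hypothetical example event:
assessment = score_event(
    "Growth research seminar (hypothetical)",
    scores={"Influencing": 4, "Raising awareness and profile": 3},
    rationales={
        "Influencing": "Senior officials attended and requested a follow-up briefing.",
        "Raising awareness and profile": "Good turnout and several media mentions.",
    },
)
```

Requiring a rationale alongside each score mirrors the point below about recording why a score was given, which keeps the numbers honest when they are discussed later.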

When assessing the event, it is important to record the rationale for each score and if the assessment was not done in collaboration with your event partners, it is good practice to have an independent view on the scores and rationales. It is also a good idea to discuss the results in appropriate forums and record any changes you make as a result.

For individual events you can present the results as a table, while for multiple events you can produce a frequency graph with stacked columns or the radar graph facility in Excel (which we’ve found particularly useful). This produces a ‘spider web’ diagram from which you can get a very quick overview.


After several events, you begin to see patterns emerging and then can decide if upcoming events need to be more focused around one of the categories. The important point is that you don’t need to score highly across all categories for all events, but over a year, you would want to see that all categories have scored highly for some events.
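That cross-event overview could also be computed directly, without Excel. The sketch below averages scores per category across several events, skipping 0 (“not applicable”) scores so they don’t drag categories down; the shortened category names and the numbers themselves are illustrative, not real DEGRP data.

```python
# Sketch: per-category averages across several events, to spot which
# categories are scoring well over a year and which need attention.

def category_averages(events):
    """events: list of dicts mapping category -> score (0 = not applicable)."""
    totals, counts = {}, {}
    for scores in events:
        for cat, s in scores.items():
            if s > 0:  # ignore categories that were not applicable
                totals[cat] = totals.get(cat, 0) + s
                counts[cat] = counts.get(cat, 0) + 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

# Three hypothetical events:
events = [
    {"Stimulating debate": 4, "Influencing": 2, "Convening": 0, "Raising awareness": 3},
    {"Stimulating debate": 5, "Influencing": 0, "Convening": 3, "Raising awareness": 4},
    {"Stimulating debate": 3, "Influencing": 4, "Convening": 0, "Raising awareness": 2},
]

for cat, avg in sorted(category_averages(events).items()):
    print(f"{cat}: {avg:.1f}")
```

The same dictionary of averages is what a radar (“spider web”) chart would plot, one spoke per category.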

The tool works quite well for us, though it is still a work in progress. It doesn’t pretend to be rigorously scientific, but it is a learning instrument that can be adapted to multiple objectives. We welcome feedback and would be keen to know how other organisations that run a range of events do their M&E. At the very least, it is important to have these kinds of conversations as part of your strategy, so that you are not just pumping out events that waste resources and fail to meet your programme objectives, but are actually thinking about engagement with your audience. After all – isn’t it about them?

About the author:

Caroline Cassidy: Communications Manager at the Overseas Development Institute in the United Kingdom
