The 2014-2015 Ebola epidemic caused major loss of life and socioeconomic disruption across Guinea, Liberia and Sierra Leone. In May 2016, the World Health Organisation (WHO) reported over 11,310 deaths. While the loss of life was tragic, there was an untold story: the disproportionate burden on women. According to the Liberian Ministry of Health, 75% of cases in the country were women.
Despite this high rate of female cases, sex-disaggregated data was not made available by the WHO until December 2014. Without this data, the Ebola response team was not fully informed. What’s more, at the time of writing, the WHO Ebola Response Roadmap still does not include gender indicators in its suggested monitoring and evaluation framework.
The fact is, how development researchers and practitioners supply and use data can have gendered consequences. The question is: in what ways is evidence gendered, what are the implications, and what can we do to address them?
Earlier this month, the Overseas Development Institute and On Think Tanks hosted a roundtable to discuss women in think tanks and the intersection of evidence-based policy-making and gender. The event was attended by academics, think tank researchers, NGOs and academic journal editors.
The event didn’t aim to find all the answers, but rather to get the conversation started and – critically – to move the debate beyond the usual ‘add women and stir’ solution.
Here are my two key takeaways from the discussion:
Whose data is important?
If we want gender-sensitive policies, we need gender-sensitive data. There’s an interesting organisation called Project Data 2X dedicated to increasing the prevalence of gender-disaggregated data, and I particularly like their tagline: ‘without data equality there is no gender equality’.
We often hear that gathering gender-specific data is ‘too expensive.’ But that is only because it is often an afterthought, tacked on at the end of programme design or implementation. Researchers still struggle to ensure that gender is integrated from the start.
So in some instances, data is simply not available. But in others, data is available, just not being used. The Ebola case is a good example: despite data demonstrating the disproportionate number of women who died during the epidemic, the WHO has not used this information to improve its Response Roadmap. Including gender indicators in the Roadmap would be a first step towards ensuring that any future crisis response is better equipped.
We need to look at the evidence processes
Hiring more women for research and decision-making roles would go some way to addressing the problem. But when it comes to gender and evidence, it doesn’t go far enough.
We need to deal with the evidence processes themselves. What type of evidence is used in policy-making? Whose evidence is used? How is evidence interpreted? And are certain voices given preferential treatment?
For example, look at the methods we use to gather evidence to inform policy. Impact evaluations and systematic reviews are considered by some to be the ‘gold standard’ of research methodologies. But these approaches often don’t capture different social norms that exist between men and women. Mixed methods – with qualitative indicators and a focus on complex change pathways – are much better at capturing gendered experiences (as discussed in this blog).
Finally, this isn’t just about women. Gender analysis can be conducted both by, and for, men and women. We don’t need to wait for a critical mass of women before organisations can produce and use good research. The assumptions underlying research methods and the use of data need to be made explicit in order to draw out and correct biases. Women do not automatically recognise implicit biases against them just because they are women. It is up to all of us to produce better policies for everyone involved.
Please share your thoughts, questions and reflections in the comments to keep moving the debate forward.