{"id":7309,"date":"2019-08-08T11:07:01","date_gmt":"2019-08-08T16:07:01","guid":{"rendered":"https:\/\/onthinktanks.org\/articles\/\/"},"modified":"2019-08-08T11:07:01","modified_gmt":"2019-08-08T16:07:01","slug":"should-researchers-get-to-pick-who-wins-and-who-loses","status":"publish","type":"post","link":"https:\/\/onthinktanks.org\/articles\/should-researchers-get-to-pick-who-wins-and-who-loses\/","title":{"rendered":"Should researchers get to pick who wins and who loses?"},"content":{"rendered":"
This article is a challenge statement<\/em>. It is meant to elicit a reaction. Please do join the discussion.<\/p>\n Evidence Action recently announced that it has decided to close down a project, No Lean Season<\/a>, in Bangladesh. The project was not delivering its expected outcomes, it was fraught with implementation challenges, it was not working. “This is a good thing,” some of their funders<\/a> have said. In fact, Evidence Action has been praised for shutting it down and, supposedly, accepting its mistakes: if only<\/em> other NGOs did the same.<\/p>\n But I think there are two problems with this praise. First, this isn’t what the story is about – or what it should be about. Second, Evidence Action don’t seem to have accepted responsibility for what went wrong – and I think they (and others that follow their approach) have failed to see the deeply troubling ethical dilemma that their approach implies.<\/p>\n I should say that this article is not about Evidence Action. They are one of many initiatives pursuing a research-as-decision-maker approach that I find troubling and in need of greater discussion. Theirs is simply a recent case, and one that has been relatively well documented – by them.<\/p>\n Who is responsible for the outcomes of policy mistakes? What happens when researchers get involved in the implementation of policy ideas that go wrong?<\/p>\n This is a conversation that I have had with thinktankers and their funders across the world who, for many different reasons, are either motivated by the opportunity or concerned by the pressures they face for greater involvement in the implementation of policy ideas. 
Smart think tanks do not take this lightly:\u00a0CIPPEC<\/a> has struggled with this in relation to political reform in Argentina.<\/p>\n This discussion is linked to the often-abused \u201cthink and do\u201d label, to the rise of the \u201cwe know what works because an RCT says so\u201d mantra and to funding practices that increasingly bundle up research with implementation.<\/p>\n Back in the late 2000s researchers at ODI, where I worked, engaged in a discussion about whether the think tank should get involved in the delivery of projects funded by aid agencies or whether it should limit its work to undertaking research, conducting evaluations and providing independent advice. Those against the move into implementation argued that ODI would lose its intellectual autonomy. Those in favour argued that it would help ODI learn more about implementation and that it was also a way of saying: we put our money where our mouth is (only, it was not our money). But for ODI this meant delivering capacity building projects, managing research programmes in country and, maybe, actively advocating for a policy recommendation beyond traditional research communications.<\/p>\n This was the direction most funding had been taking – for think tanks in developing countries and for policy researchers in the development field – and it continues to take. Funders are not so interested in the generation of knowledge alone. They want the knowledge they fund to inform, influence and even shape policy design and its delivery. They want to measure the income raised or count the children who graduate from high school. Simply suggesting how governments might achieve that is not enough.<\/p>\n As a consequence, they increasingly design, or are driven towards, programmes that include research as a means or a support for delivery – but rarely research as the main purpose.<\/p>\n I had conflicting thoughts about this. At the time, I sided with the implementers at ODI. 
I have not entirely changed my mind, but I have learned that stepping over the line ought to come with greater responsibility and accountability.<\/p>\n This is having an important effect on the nature of researchers’ work. In my opinion, when researchers get involved in these efforts they risk losing the autonomy to study and discuss them critically and openly. Their policy ideas are no longer \u201cjust ideas\u201d; they become inputs into a narrative that supports the interventions. They are accessories to their potential success or failure. They are complicit in their welfare gains and losses.<\/p>\n This has never been more relevant than in the context of the experimentation agenda that has been rapidly adopted by funders, governments and researchers alike (well, by some researchers).<\/p>\n Experimentation is a great tactic to convince doubtful policymakers that a policy idea is in fact a good idea – one that is worth putting a lot of resources behind. When advising think tanks on their communication strategies we recommend that their policy arguments should use the evidence generated by others – including evidence of successful pilots or from the full implementation of the policy idea elsewhere.<\/p>\n This is a significant source of power in any policy argument. Peruvian policymakers were happy to copy Chile’s private pension provider model: if the Chileans, whom we secretly envy and aspire to emulate, think this is a good idea, then we should think so, too. (Only, maybe, it wasn’t such a good idea.) Discussions about education reform in the UK are peppered with references to the successes of the US or Scandinavian models. (Brits are secretly in love with Americans and openly in love with Scandinavians.) Only to their successes, of course. Failures are never included in the op-eds or TED talks used to argue for reform.<\/p>\n But positive experiments to point to require someone to try them out first. 
Someone has to develop the idea, test it, learn from it, scale it, record it and promote it elsewhere. Traditionally, this has been the role of government, with researchers and think tanks playing minor, supporting roles – developing the ideas and concepts, nudging, informing, responding, challenging, pointing out mistakes, sharing successes and lots of other small interventions that, collectively, may make a small difference but never fundamentally resolve an issue.<\/p>\n This is because generating positive experiments involves taking on a big political risk – by their nature, you might end up with many failures before arriving at a success. The costs could be political, social and economic: incorporating changes to the school curricula could alienate a party\u2019s traditional constituency; a mistake in the delivery of new protocols for water safety could lead to unintended illness through poisoning and, rightly, to criminal charges and the figurative death of a few political careers – if not to the death of individuals; and the introduction of a new system to simplify SME registration could lead to unexpected backlogs and costs – which could cost jobs.<\/p>\n So encouraging governments to take on a policy innovation, test it, scale it and test it again has always been hard. And it should be!<\/p>\n RCTs offer funders a very powerful tool to generate these positive experiences and avoid many of the barriers that governments face. An RCT can be done by researchers at a manageable scale, with little or no collaboration from government. Positive results could be scaled beyond the country through the funders\u2019 and the researchers\u2019 networks – even if the governments involved in the pilots themselves did not scale up the experiences.<\/p>\n Governments can remain free from the backlash of failed experiments. 
And funders and researchers can claim that their interventions are based on evidence – even if they lack political legitimacy.<\/p>\n In essence it is a partial privatisation\u00a0of policymaking.<\/p>\n But this new context raises two fundamental moral questions: who is responsible for a private intervention gone wrong, and how can they be held accountable?<\/p>\n This is no longer the traditional relationship in which a \u201cresearcher provided a policy idea and the policymaker was free to act on it\u201d. In this old relationship, the researcher could remain (and was right to) at arm\u2019s length from the implementation of the idea. Researchers could even stay clear of the welfare implications of a political decision. Sure, their recommendation may negatively affect low-income families, but the decision to actually do so was not the researcher\u2019s to make. It was the policymaker who chose, freely, to adopt the recommendations.<\/p>\n This is a new relationship in which the researcher adopts part of the role of the policymaker. Critically, not as a consultant, who merely delivers what the policymaker requests via well-defined terms of reference, through a contractual relationship that would normally require the consultant to secure liability insurance for when things go wrong, and with no autonomy to decide whether the intervention goes ahead, is halted or is stopped.<\/p>\n What we now have is a new relationship in which the researchers retain a significant level of agency and have been empowered, by their funders or by willing policymakers, to make choices about the public and their welfare.<\/p>\n If things go well, researchers will be quick to claim success. Equally, when things go wrong, they should be quick to accept responsibility.<\/p>\n They may think twice about it, though. When things go wrong in public interventions people suffer. They lose power. 
They may lose income, or see their health affected, or they may experience a disruption to a service they depend on.<\/p>\n The recent case of Evidence Action brings this discussion to life. Evidence Action had an idea: seasonal migration could increase the income of rural families in Bangladesh. A small grant could help families take the first step and send a family member to a nearby town or city. The migrant would help raise the family\u2019s income, and after a positive experience the process would continue in the future without the need for a grant. This might also help discourage permanent rural-to-urban migration. This idea was backed by some evidence, but it had to be tested on the ground.<\/p>\n There were expected risks. When this idea was presented to the Peruvian Ministry of Development in 2018 as a successful intervention (although it was still being tested in Bangladesh) which the government could apply in the rural parts of the Amazon region, local researchers were quick to shoot it down. This would certainly involve, among other things, the risk of family\/community breakup and of women and children (who would no doubt be among the migrants) falling victim to human trafficking. The most vulnerable in their communities could very well be put at risk of death. In their opinion this was a risk too high to accept. The government agreed.<\/p>\n In their explanation of why the project was closed, Evidence Action acknowledges that one of the risks of the intervention was that families would choose minors to migrate in search of work. According to a recent statement,<\/a> they considered that the risk could be minimised – yet not eliminated. I am sure they did their best to avoid encouraging underage migration, but they knew that this was impossible to guarantee. 
They calculated, though, that the benefits would outweigh the costs.<\/p>\n Unfortunately,<\/p>\n in January 2019, one of the known risks of the program came to bear in a tragic manner: an overloaded truck fell onto a temporary shelter in which several seasonal migrants who had migrated to work at a brick kiln were sleeping, killing 13 individuals, five of whom were affiliated with households that participated in No Lean Season. Moreover, four of the five were underage males aged 15-17. We were deeply saddened by this incident and the implications for these five No Lean Season households.<\/p><\/blockquote>\nThe story really is about the roles of researchers and their accountability<\/h2>\n