[Editor’s note: This post was first published on Politics and Ideas. For more on Andrea’s work on research quality go to her edited series on Peer Review Mechanisms.]
Two weeks ago, TTIX2015 took place in Istanbul and some initial reactions have already emerged. Enrique Mendizabal did an exercise of finding “the elephants in the rooms”, Vanessa Weyrauch reflected on the exchange itself and what it means for innovation among think tanks and policymakers, and Richard Darlington wrote postcards, bringing an outsider’s perspective on the TTIX. I want to do something a bit different.
When I found out that TTIX would focus on research quality, I was quite excited (maybe you have seen my previous posts on these issues here and at On Think Tanks). I wasn’t worried that we were doing too much “research quality” talk; in fact, the frustrating part for me was that the conversations often drifted into other topics, which made it hard work to summarize key takeaways on the main issue of the discussion. I took a couple of days after the conference to reflect, and here are my takeaways:
Research quality is taken for granted
The first session of the conference started with a debate on the definition of research quality. As others have already reported, this conversation probably led to more questions than answers, as we struggled as a group to define research quality and determine indicators for it: is it about publishing in journals? Is it about influencing policy? We had no clear answers, and this points to my main concern: we overlook research quality.
I often hear statements that begin with “Well, first you do good research… and then you communicate, influence, have impact…”, and it is the second part of that sentence that gets much more attention than the first.
Research quality is taken for granted, but it shouldn’t be. As a perfect example, I offer the many discussions we often have about poor research that has had an impact on policy. Do you really want to be the researcher who has an impact with poor quality research? I am guessing not. [Editor’s note: although this could open a discussion on whether good research inevitably leads to good advice. Often, good research can only get us close to an answer but no further. Some problems need to be solved with more hunch than science.]
This was also exemplified by the fact that some speakers used the expression “research value chain”, depicting a linear process: setting questions, selecting methods, collecting data, analysing, communicating, influencing, impacting… But I do not think this is a linear process. Especially when planning to influence policy, one shouldn’t wait until the end of a project to fit the research and analysis into the context and debate.
The insider’s and outsider’s perspectives on quality (and impact) are different
When discussing quality there are two approaches one can take.
- As a researcher, research team or centre, one can ask: “What can I (we) do better to improve our research?”
- As a donor, policymaker, or user the question may be “How do I know that what they do is good enough, trustworthy or credible?”
These debates were intertwined throughout the conference, but I think they are slightly different questions that demand different treatments.
Somehow, however, the key debate at the conference centred on whether policy impact should be a marker for research quality or not.
The researcher’s perspective
Does this mean that, as a researcher trying to improve the quality of your research, you should aim at influencing policy above all? I do not think so. In fact, doing so underestimates the complexity of the policy process and the different actors involved in it.
Does this mean that researchers shouldn’t take policy influence into account? Well, no: fortunately, this is not an either/or situation. I think that researchers trying to be part of the policy debate must be very clear about this objective, but they shouldn’t judge the quality of their work by whether or not a policymaker takes up the recommendations at the end of the day.
In fact, influencing policymaking might take a long time; it is rare that we can attribute it to a single piece of work, and it may happen in many different forms, not just as a linear uptake of a set of recommendations.
A new approach to research project design
For me, striking this balance means effectively incorporating into the research process the analytical tools to understand the context, the politics, and the policy processes, so as to reinforce the interaction between the problem to be solved and the type of analysis, recommendations or implications that research can contribute. We need to develop concrete tactics to do this alongside the traditional research process, and not just as an afterthought when the research is finished.
Some researchers have this implicit knowledge: they are very good at defining relevant questions, finding windows of opportunity, and ultimately producing research that is better fit for purpose. We could all benefit from codifying this knowledge and these tools. I think we have said “context is important” enough by now; now we need to figure out what to do about it in practice (if interested, please read these first ideas here).
I think this is what we can honestly do. The alternatives, such as telling policymakers what they want to hear, or modifying our results or findings, shouldn’t be options.
The funders’ and users’ perspective
From an outsider’s perspective, is influencing policy a marker of quality? Again, no. As I have said above, impact is a complex business, and although, ideally, good research gets to influence policy, this is not always the case. Sometimes good research doesn’t get the attention its researchers think it deserves. Sometimes it is just a matter of “its time hasn’t come yet”; and other times it is just that its intended users are biased against its authors.
Furthermore, there are cases where bad research gets a lot of attention (think about spurious climate change research). I have no answer to these questions, except knowing that assessing research quality is risky business for reviewers, donors, and policymakers.
From the policymakers’ panels at TTIX2015 we got a glimpse of their methods for assessing research quality (as imperfect as that can be). We learned, quite clearly, that if they are not listening to you, it is not always because they do not understand or do not care. It is not that they are dumb or cannot read.
It might be that they do not consider it worth their time.
Who, then, is responsible for research quality?
This question sparked some attention in the debates and there were takeaways at several levels: while some alluded to the responsibility of think tanks and of the broader society, it was not forgotten that, at the end of the day, being a good researcher is also a personal decision.
Here are some thoughts on these three levels of responsibility:
- Individual – I like to stress this point to researchers all the time: it is our name on that paper! My main takeaway regarding the level of individual responsibility was the intellectual integrity that some speakers alluded to. It is key to be honest [or transparent] about our own beliefs, our capacities, our level of expertise and experience, and our objectives. This sounds easy, but in practice it might be difficult and costly. Furthermore, institutional demands might push researchers in other directions.
- Institutional – It was constantly emphasised that institutional commitment is crucial to ensure the quality of research. If a centre is committed to high quality, it might be able to put incentives or review mechanisms in place to promote higher quality. But these, of course, have a limit, as the wider research ecosystem of a country will have a strong impact on quality, on the salaries of researchers, and on how competitive think tanks are as places to work. The panels on self-assessment based on Organisational Capacity Building and on peer review systems shed some light on these institutional challenges.
- Social – Finally, research quality is a wider issue. Is it possible to deliver high quality research in settings where research is not valued? Can there be ‘islands’ of excellence in a non-conducive environment? As much as my idealist self would hope for this possibility, in practice it seems that this is not the case. The interventions at the conference pointed to the limits of our efforts to sustain quality if there is no functioning ecosystem where good research is praised, bad research is denounced, and support for producing good research is catalysed.
The life of think tanks is many-sided, juggling research, communications, influence, and management. The conference confirmed one of my worries: we need just as much work on the research dimension as on the others. We cannot just assume that we are doing good quality research.
What I missed from the TTIX
As I said before, I would have liked the discussion to be more focused and centred on research quality.
To accomplish this, it would have been useful to have undertaken preliminary research and produced a series of think pieces to guide the conversations. This could have helped to keep us on track and to leave the conference with a much more concrete outcome.
One of these concrete outcomes could have been a more formal document to guide us forward, such as an ‘Istanbul Declaration for Research Quality for Think Tanks’, which could move think tanks as a community to the next level in the debate.
In a couple more years we will have a third (and maybe last) TTIX, and we should learn from this one and work to make sure we have more focused discussions and concrete outputs.