[This article was originally published in the On Think Tanks 2017 Annual Review.]
As researchers we care deeply about the credibility of the work we do. One way we demonstrate this is through meticulous attention to quality. We are careful about controlling for confounders and bias. We triangulate using multiple sources. We document everything to allow for future replication and meta-research. After all, we are serious researchers who approach things scientifically.
But how do we judge this hard work?
Across disciplines – be they social or natural – research evaluation begins with peer-review. Put simply: we ask a colleague for their opinion. Even though this opinion is typically qualified as expert and unbiased, the result remains opinion. Very rarely is empirical evidence gathered or assessed.
Following peer-review, the quality of research is increasingly being determined by analytic ‘metrics’ such as bibliometrics and scientometrics, both of which include forms of academic citation analysis, or altmetrics, which are based largely on social media attention. Whichever of these metrics is used, however, the result is essentially a proxy indicator of the popularity of a publication. These measures tell us very little about the importance of the research topic we chose to tackle, or the scientific rigour our work demonstrated, let alone whether our findings influenced policy or practice, or made an impact in society.
This current view of quality is problematic because it shapes decisions about what (and who) gets valued, communicated and funded. These approaches are not wrong per se; they are insufficient. It is time to advance a more holistic and systematic means of evaluating research quality.
A way forward
I work within the Policy and Evaluation team at the International Development Research Centre (IDRC). We are a Canadian institution that supports research across the Global South, and we care deeply about the credibility of this work. In our view, credible research underpins a prosperous future.
Inspired by stories from our diverse research community, we set out to capture a new view of what it means to produce credible research. We asked ourselves: why are some research organisations more valued than others in terms of peer-review and metrics?
To unpack this issue, we worked with our research partners and with colleagues Zenda Ofir (Independent Evaluator and Honorary Professor at Stellenbosch University) and Thomas Schwandt (Professor at the University of Illinois at Urbana-Champaign). What resulted is a novel method of research evaluation we call Research Quality Plus, or RQ+. RQ+ has shown us that a more holistic and scientific approach to research quality determination is both feasible and essential. Below, I outline the core components and how they embrace three fundamental developments. You can read more about what RQ+ is, how we used it, and how it might be used in other settings in English, Spanish, or French.
RQ+ suggests three essential principles:
- Accept a multi-dimensional view of quality in research. Scientific rigour is likely a non-negotiable, but concepts of quality should include other values and objectives that matter to our institutions. For IDRC, these are exemplified in figure 1. For other funders, think tanks, journals and universities, these dimensions may be very different. This is a good thing.
- Take into account the context in which research happens. The predominant forms of research quality assessment tend to isolate research from its environment. But there is much to learn by considering research within varying political, organisational, disciplinary and data settings. Doing so reinforces good systems thinking.
- As with the research we conduct, our judgement of quality must be underpinned by empirical evidence, not just opinion. With this in mind, go out and ask the intended users of a research project for their insights, and balance these against the voices of beneficiary communities, other researchers in the same field, and the bibliometrics (a rough sketch of this balancing follows below).
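To make the third principle concrete, here is a minimal sketch in Python of what evidence-informed, multi-dimensional scoring could look like. Everything here is a hypothetical convention for illustration: the dimension names, evidence sources, and 1-8 rating scale are assumptions for the sake of the example, not part of the RQ+ instrument itself.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical illustration only: the dimension names, evidence sources and
# the 1-8 rating scale below are assumptions, not the actual RQ+ instrument.

@dataclass
class Evidence:
    source: str    # e.g. "intended users", "beneficiary community", "peer", "bibliometrics"
    rating: float  # a rating on a 1-8 scale (one possible convention)

def dimension_score(evidence: list[Evidence]) -> float:
    """Average the ratings gathered from several evidence sources, so that no
    single voice (e.g. citation counts alone) determines the judgement."""
    return mean(e.rating for e in evidence)

def quality_profile(dimensions: dict[str, list[Evidence]]) -> dict[str, float]:
    """Report each dimension separately rather than collapsing everything to
    one number: a multi-dimensional view keeps trade-offs visible."""
    return {name: dimension_score(ev) for name, ev in dimensions.items()}

if __name__ == "__main__":
    project = {
        "scientific rigour": [Evidence("peer", 7), Evidence("bibliometrics", 5)],
        "importance to users": [Evidence("intended users", 6),
                                Evidence("beneficiary community", 8)],
    }
    for dimension, score in quality_profile(project).items():
        print(f"{dimension}: {score:.1f}")
```

The design choice worth noting is that scores stay disaggregated by dimension; any weighting or aggregation across dimensions is a value judgement that each institution should make explicitly, in light of what matters to it.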
Time to act
We continue to develop the concept with key research partners. For example, in late 2017, we collaborated with the Sustainable Development Policy Institute, a think tank based in Islamabad, Pakistan, to look at how the RQ+ approach might support and advance the research credibility agenda for think tanks in South Asia. The ideas and opportunities generated as part of this process were deeply inspiring.
We encourage think tanks, researchers, and funders to join us in re-thinking our approaches to conceptualising and evaluating quality and credibility. RQ+ presents a practical starting point, and we hope that it is tailored, tested, and improved by others.
When it comes to improving research credibility, the good news is that the solution to the challenge involves researchers doing exactly what they do best: innovating and experimenting.