Perspectives on the Peer Review Pilot

7 July 2014
SERIES: Peer reviews for think tanks

[Editor’s note: This is the seventh in a series of posts on a peer review mechanism pilot for think tanks. It has been edited by Andrea Ordoñez as part of the Guest Editor initiative launched by On Think Tanks last year. If you are interested in being a Guest Editor please get in touch.]

In the previous two posts in this series, by Horacio Vera and Patricia Ames, we learned from the participants what the process meant for their personal and institutional capacities. These first-hand reflections complement the evaluation process in which authors and reviewers assessed the pilot. Our questions covered substantive as well as more logistical matters. Here are some of the main points gathered from the participants:

  • The process is useful – Most authors found the reviews they received clear and useful, and they incorporated the suggested changes. In many cases this process probably complements the review processes that think tanks carry out internally. But we all know how think tanks’ day-to-day life goes, and how engaging with our colleagues’ work is something we sometimes simply do not have time for. The system is schematic, with comments on the literature review, relevance, and the consistency of the arguments, and this structure may have helped reviewers provide meaningful comments.
  • More knowledge does not always translate into better comments – One interesting thing we did was ask the authors to grade the reviews they received. Each paper had two reviews. Authors were asked how knowledgeable each reviewer was in their field and how useful their comments were. Interestingly, the reviewers regarded as highly knowledgeable were not necessarily the ones with the most useful comments. In many cases, authors placed significant value on the comments of someone who was, in their opinion, not as ‘expert’. This is a valuable lesson on how to match reviewers with authors. Experts can have incredibly useful insights into the specificities of the work carried out, of course. But there may also be a valuable space for more ‘generalists’ who can review a paper within a wider context of development or policy debate, assessing the consistency of the argument and other broader concerns.
  • Reviewers need a good amount of time – As one can imagine, getting reviewers to commit their time is not simple. In our pilot, reviewers had two weeks to carry out their review. In the evaluation, reviewers were asked to consider how much time they should have. Although the responses varied, it is safe to say that 30 days, twice as long as in our original pilot, is a better time frame for meeting the expectations of most reviewers. This is a very important point to keep in mind for time-sensitive pieces of research, which is the case for various think tank products.
  • Monetary incentives are good, but not enough – I knew that getting reviewers to participate was not going to be easy. We wanted to test two options: voluntary participation and paid participation. We therefore included a monetary incentive for external reviewers, but asked for voluntary participation from reviewers at the participating think tanks. In the case of external reviewers, the monetary incentive was not enough to attract several researchers, who declined our invitation. Although many were interested, they simply did not have the time. It seems to me that monetary incentives are good in terms of efficiency: once reviewers accepted the invitation, they were for the most part timely with their work (there is, however, research suggesting that monetary incentives can have a negative effect on reviewers). But money alone is not enough: recognition and the opportunity to be part of a ‘community’ are among the incentives that reviewers would have liked to see. In the case of internal reviewers, participation was much more volatile: they took longer to accept the invitation, and once they accepted, most took longer than the agreed time. Furthermore, it was harder to match reviewers’ expertise with papers. In some cases we had researchers who volunteered to review but whose expertise did not match any papers in the process; at the same time, for some topics we could not find reviewers within the network of think tanks. Given how time-consuming this process became, I suggest not repeating it.

Drawing on all these inputs, the following post will summarise the most significant aspects for think tanks interested in peer review processes to keep in mind.