[Editor’s Note: This post was written by Patricia Ames, Research Director at the Institute of Peruvian Studies. It is part of the series devoted to discussing peer review processes for think tanks. It has been edited by Andrea Ordóñez as part of the Guest Editor initiative launched by On Think Tanks last year.]
Peer review in its current form has been popularized by academic journals in recent years, but the underlying ideas that fuel it have been in place for much longer. Indeed, review and critique by colleagues is part of the process of creating knowledge in the social sciences. Perhaps in the past such processes were more informal than they are now, but they were in place nonetheless.
The experience of my own institution points in that direction: the Institute of Peruvian Studies (Instituto de Estudios Peruanos – IEP) has had, since its birth 50 years ago, several mechanisms to fuel discussion of ideas, research approaches and results. The best known was the mesa verde (green table, named after the color of the tablecloth). It was a space that gathered researchers, policy makers, practitioners and activists – depending on the topic – to discuss different types of products: research proposals, research papers, books, policy papers (even public policies!). This oral exchange allowed the researcher to improve his or her final product. Other mechanisms, closer to current peer review practice, were also in place: researchers asked their colleagues at the Institute to read a paper or work in progress and provide opinions, suggestions and, in the case of junior scholars, even guidance.
As the Institute has grown in size, these mechanisms have not disappeared, but more formal ways of conducting peer review have been introduced at various stages. For internal research competitions, we carry out a review of proposals to determine the winners and to provide feedback. We have gradually moved from internal to external peer review of finalized research products. Sometimes we combine both, as each has its strengths and weaknesses.
In the last year we have been using double-blind peer review aimed at increasing the quality of our main outputs. The opportunity to be part of the TTI pilot on peer review allowed this process to become more international in scope. Although I will focus on this most recent experience, the wider peer review processes I regularly manage at IEP inevitably inform my reflections.
It is helpful!
I have seen various reactions to peer review processes, so let's start with the bright side: the positive reactions of those who received really good feedback and useful suggestions from reviewers. Indeed, having a second eye on one's work – and, I should add, an expert eye – is an exciting opportunity to get advice on how to strengthen the research output.
Some researchers acknowledge that the weaknesses identified by the review were the same ones they themselves were aware of (but which, for example, time constraints did not allow them to work on further). Reviewers also pointed out strengths that authors were not always aware of. The possibility of having international readers also makes this exercise interesting in terms of what can be shared beyond borders. Think tanks' credibility depends to a great extent on the academic rigor of the research that backs up their policy recommendations, so having stronger research outputs helps maintain and increase a think tank's reputation.
It is time-consuming: putting some time aside
Think tanks usually work on a tighter schedule than universities or other research centres. Reports need to be ready sooner rather than later, and one of the difficulties with the peer review process is that it takes time: time for the reviewer to read and comment, time for the author to incorporate the comments, and then time for making the final product available to a general or specific public.
Most projects and consultancies formally end once the final report is handed in. Thus, making extra time for improving manuscripts is not always easy, since researchers may already be embarked on a new project. One way out of this dilemma is to start including time for such processes in our research designs, while at the same time finding mechanisms to speed up review. This is particularly important in bigger institutions, where a centralized system may be too slow and a decentralized alternative may be encouraged (i.e. each project looks for its own internal and/or external readers and feedback).
Not all comments welcome: how to handle bad reviews
Reviews come in many forms: some positive, some negative, some with detailed and useful suggestions on how to solve problems, others with harsh evaluations and little advice on how to proceed. Reactions to such reviews are thus varied: some find reviews helpful, while others find them not helpful at all and deeply disagree with the comments received. I think this is all the more intense in think tanks, where political issues and positions are at stake. When this happens, we encourage authors to express and argue their disagreements. In the process, some methodological and theoretical gains appear, and thus the research output still benefits beyond the controversies.
I think that for policy papers the challenge is bigger, as policy contexts are varied and detailed knowledge of each one is necessary to assess the relevance, not only the rigor, of the output. Perhaps this is one reason think tanks are more conservative about putting such outputs through the peer review process. However, if reviewers are well chosen, and these issues are kept in mind, policy papers may benefit as much as research papers from reviews.
Creating and recreating cultures of peer review
There is not only one way to discuss and improve research outputs. Current peer review practices are not the first ones, and they surely won't be the last. But as they become more widespread, it may be necessary for think tanks to try and use them, recognizing that other forms of peer review may have flaws – they may be too insular or too bland – and that external views can help improve our own, ongoing understanding of complex realities. This challenge involves, without a doubt, contesting previous cultures of peer review and recreating the ways we approach knowledge production.