Lessons from Peer Reviewing among Think Tanks

9 July 2014
Series: Peer reviews for think tanks (8 items)

[Editor’s note: This is the last of a series of posts on a peer review mechanism pilot for think tanks. It has been edited by Andrea Ordoñez as part of the Guest Editor initiative launched by On Think Tanks last year. If you are interested in being a Guest Editor please get in touch.]

The peer review pilot was a space to learn about these processes; as with any exploratory exercise, I was left with many questions and also some ideas for the future. I have no recipe for building a peer review process for think tanks, but I do have some questions that should be answered before establishing one, whether within a single think tank or among several institutions. Along the way, other ideas on strengthening think tanks’ research capacity have also emerged, which are shared here as well.

Define the objectives of a peer review system

As discussed in a previous post, peer review processes are carried out for a variety of reasons. In this initial pilot we did not set a clear objective for the reviews themselves; the objective was simply to test a peer review system. I did, however, bring some preconceptions to the design that are worth noting:

  • I had an underlying interest in capacity development. Since this pilot was not aimed at sanctioning research papers, the most important goal for me was to make the reviews relevant to researchers and think tanks.
  • For some of the outputs (especially the policy briefs), I also wanted to include policy influence potential among the aspects to be evaluated.

These, of course, are my own perspectives on a peer review process. But if such a process is established, whether within a single think tank or among several, these concepts should be debated more clearly and broadly among those involved. If a think tank wants to introduce a peer review process, I suggest having some clarity on these key points:

  • Objective – What is the objective of having a peer review process? Be clear about what a peer review process is, and what it is not. Check out some options here. After that, keep questioning your choice: Will the peer review determine what gets published or not? Will reviews feed into researchers’ appraisals as a component of their performance? Will reviews be used to determine a capacity building strategy within the organization? A peer review process should be alive and evolve with the organization.
  • Research Quality – It is also critical to make sure there is some sort of consensus about what will be rated as ‘good research quality’. There are many assumptions about what this is in academia, and I think there are even more in the context of think tanks. The Think Tank Initiative’s External Evaluation summarizes the assumptions held by the program’s stakeholders and might be a useful resource for reflecting on where you stand in this debate. If nothing else, start by having this debate among researchers.
  • Impact – Traditional peer review processes do not consider impact, relevance to policy debates, or the quality of recommendations. Your process does not have to assess these dimensions either, but if it does, make that clear from the beginning, as reviewers might not be used to them. Let reviewers know beforehand that this is an important criterion.
  • Types of research outputs – In this pilot we tried to categorize research outputs: working papers, policy briefs, book chapters, etc. After all, it is the final product that is evaluated, not the research process as a whole. These categories, however, were not as useful when it came to understanding the purpose and nature of the products. Traditional categories might not really capture the purpose of a given output. Furthermore, the understanding of what a ‘working paper’ or a ‘policy brief’ is might differ substantially between authors and reviewers. When it does, authors can feel that the reviewer did not understand the nature of their work. It might be necessary to explore new categories of research outputs that truly explain their purposes.

Explore who the ‘peers’ are

The concept of peer review is based on the concept of peers – researchers with competences similar to those of the authors. However, identifying them in an increasingly complex academic ecosystem is a challenge. Perhaps in the past the boundaries between disciplines were clearer; now the academic world is a growing mosaic of disciplines, with much more interdisciplinary work being carried out. In addition to this complexity, think tank researchers face the added layer of relating to policymakers and the wider public.

So who are the peers? Are they the ones with the same academic background? The ones who know the field? The ones who know the national policy context? Or are they other think tank researchers who know the challenges of these interactions between research and policy? Defining this might give more clarity to reviewers and authors and increase the credibility of the process.

Let us learn more about what makes a good reviewer

Although peer review is one of the pillars of the academic world, it is rarely a clear subject of either research or capacity development. Have we learned how to review documents? How to analyse our own objectivity and capacity to review the work of others? I suggest that more emphasis be given to this aspect of capacity building among researchers. We should not take for granted that this is something all researchers are used to doing or are good at.

To serve this objective of developing the capacity of reviewers, we might need better conceptual and psychological frameworks as well as empirical evidence on how and why peer review processes work. There is a small but growing group of researchers interested not only in evaluating the system but also in updating it for the context of a globalized academic world.

Research quality beyond the peer review system

I support peer review processes. In fact, I introduced one at the think tank I worked at before. But I know their limitations, and I am convinced that they are no silver bullet for improving the quality of research in a given institution. Although I keep advocating for peer review as a key component of any effort to improve research quality, it might not be enough on its own. Here are some ideas for additional activities that could support think tanks in the global south.

  • Ad hoc peer review processes do not, and cannot, replace those of academic journals. As we have discussed, each journal and discipline has its own specificities. If one of the markers of ‘research quality’ is publishing in these journals, other strategies are needed. In this case, researchers need to be more involved in their own academic field and better understand how the journals they are aiming at actually work. This is almost a separate body of knowledge from that of one’s own field. Some strategies for gaining this knowledge include:
    • Events with journal editors – What I would imagine is an event where researchers could listen directly to journal editors discuss what they perceive to be the value added of their particular journals. There could be, for instance, round tables with journals in a similar field. The objective is to bridge the gap between those journals and researchers. Researchers at universities, particularly in the North, are much more used to connecting with editors and reviewers, who may also have been their teachers or advisors.
    • Special Issues – Another way to bridge this gap may be to promote special issues within journals, aimed at a specific area of interest or region. These special issues go through an equally rigorous peer review process, but are usually focused on a specific topic or community. There is a growing interest in including more academic voices from the global south, but more incentives are needed, and special issues are an interesting approach.
  • The peer review process cannot improve a research output if there were significant flaws in the research process itself. For younger research teams, a review only at the manuscript stage might not be enough. Instead, comments throughout the research process might be of use:
    • Continuous mentoring – One technique worth noting is that of the SIRCA programme, which pairs young researchers with senior mentors who accompany the entire research process. In this case the knowledge and expertise of the senior researcher is available throughout the project. This matchmaking is not an easy task, just as finding suitable reviewers is not, but it is one where the investment might have higher returns.
    • Working groups – As described in a previous post, we did find some common themes among researchers supported by the Think Tank Initiative. Creating thematic working groups on those topics, drawing on researchers from these and other think tanks in the region, could build a valuable community in which to find reviewers, co-authors and collaborators.

I am convinced that the debate over what research quality means for think tanks, how to measure it, and how to support it will continue to gain momentum. I am happy that the Think Tank Initiative was willing to experiment with this concept, and I hope it will continue to innovate with new ideas on how to support think tanks in improving research quality.