How One Researcher Is Looking to Improve Peer Review

July 15, 2019 Elizabeth Moylan

Serge Horbach is a PhD student at the Institute for Science in Society, Radboud University Nijmegen, and a guest researcher at the Centre for Science and Technology Studies at Leiden University. Serge recently visited Wiley to talk to us about his research project, Improving Peer Review (IMPER). Here, we share some glimpses into Serge's research so far, together with his perspective on questions that provoked discussion with Wiley colleagues. 

Serge set the scene for his research into peer review by posing a number of questions: What is peer review for? What does it aim to regulate? How is peer review structured? How effective is it? He explained how he tackled these questions within his own PhD project. First off, he made an inventory of the various models of peer review, distinguishing 12 different dimensions. From there he mapped which journals use particular types or models of peer review, and how these changed over time. This turned out to be no easy task, and Serge had to survey individuals to obtain information that could have been more transparently available on journal websites. Learning from this experience, he established the Declaration on transparent editorial policies for academic journals, aspects of which resonate with Wiley's Better Peer Review project. Serge went on to assess the effectiveness of peer review by relating different peer review models to retraction rates. And finally, he explored innovations in peer review processes and how these might have come about.  

We asked Serge to share his thoughts on these questions: 

Q. What are the drivers for journals or other stakeholders to introduce or suggest changes to peer review? 

A. This is closely tied to the expectations that different actors have of the peer review system. The expectations of journal peer review have been in constant flux and still remain controversial. Some expectations are more or less universally accepted, such as the expectation to act as a filter, distinguishing ‘good’ from ‘bad’ research, and the expectation to improve a manuscript’s quality through the comments and feedback that reviewers give to authors. However, other expectations are not universal, and the understanding of ‘quality’ may vary considerably. For example, fraud detection, the creation of fair and equal opportunities for all authors, and the establishment of a hierarchy of published results, i.e., ranking articles based on journal characteristics (for instance, using various metrics), have emerged as expectations of the peer review and editorial systems. 

The differing expectations of the review system have led to a range of novel review procedures. The expectation to detect fraud has, for instance, triggered the development of text and image similarity scanners. Open or transparent review, including the sharing of review reports, as well as its radical opposite, the double-blind review system, have emerged from efforts to address bias in peer review. 
 

Q. Are you seeing high rates of innovation in peer review across journals? 

A. Actually, no. From our analysis, we see a high number of suggestions about how to improve the review system. Given the growing concerns about peer review, we might expect these innovations to disseminate quickly across the industry and be implemented by many journals. However, in our study among journal editors, we found that implementation of innovative review procedures is remarkably slow. In our sample, we found only a few journals that have made substantial changes to their review processes since the beginning of this century. The majority of the editors even claimed that they have not changed anything at all in the past 20 years. This leads to the situation where traditional forms of peer review still prevail by far over more innovative formats, such as open review, post-publication review, or review assisted by the wider community and digital tools. 
A notable exception to this is text similarity scanners, the use of which has quickly become more or less common practice. There seem to be several reasons why these scanners form a special case and are more readily implemented than other innovations. Text similarity scanners promise a simple fix for the rather uncontested issue of plagiarism and problematic text recycling. In addition, such scanners had already been implemented successfully in higher education, scanning student papers. This may have provided a testbed, allowing faster implementation in other contexts, including academic publishing.  

Q. Did you find that certain models of peer review were more effective than others in preventing retractions, and why do you think this is the case? 

A. Several review procedures show significant differences in the number of retractions associated with them. We found some of the most prominent differences when considering the level of author anonymity: blinded author identities (the double- or triple-blind review models) are associated with significantly fewer retractions compared to review procedures in which author identities are disclosed (as, for instance, in the single-blind or open review models). Also, review procedures that use ‘anticipated impact’ or ‘novelty’ as a selection criterion are associated with significantly more retractions.  

In contrast, journals using plagiarism or statistics scanners are associated with fewer retractions compared to journals performing review without digital tools such as similarity scanners. We also found fewer retractions in journals using the pre-submission (Registered Reports) review model. However, we must add that this is a recent initiative that is still growing and therefore the sample size is relatively small. 

The reasons why these review models are associated with more or fewer retractions remain somewhat speculative. Several factors have been suggested, including psychological mechanisms that could account for differences between single- and double-blind review, but most likely the differences between review models result from the complex interplay of multiple factors. More research is required to elucidate the causal mechanisms behind the correlations we found.  

Q. And what’s next for your research? 

A. We want to continue our current research, assessing how and why journals innovate their editorial practices and what the consequences of such innovations are. In addition, we have just received funding from the Netherlands Organisation for Health Research and Development to create an online Platform for Responsible Editorial Policies (PREP). On this platform, academic journal editors can receive advice about how their peer review procedures could be improved. In addition, we will provide the possibility for journals and publishers to be transparent about their editorial and peer review practices. In return, we will collect more data about the current peer review procedures of academic journals, thereby allowing more and better analyses of the effectiveness of different review models. We hope that the PREP website will grow into a knowledge platform about responsible editing of academic journals. Recommendations about improved transparency and the responsible use of publication indicators will, therefore, be included. 

Thank you, Serge. We look forward to hearing more as your research develops. And if editors are interested in sharing their experiences of peer review innovation, feel free to contact Serge directly (s.horbach@science.ru.nl). 

 

About the Author

Elizabeth Moylan

Publisher, Wiley
