
Usefulness: An Essential Area for Better Peer Review


Chris Graf, Former Director of Research Integrity in the Open Research team at Wiley.

March 27, 2019

“Peer review is useful when it benefits all stakeholders. It means providing constructive feedback to authors so that they can improve the clarity and accuracy of their research article and report their work in the best possible way. It means providing reviewers with concise and easily accessible guidance on assessing papers.” This is how a group of talented Wiley colleagues define “Usefulness,” one of five “essential areas” for better peer review identified in our work “What does better peer review look like?”

We published this in the peer-reviewed journal Learned Publishing in January 2019 (after we first shared a longer version in a preprint). This post explores that essential area, “Usefulness,” and reflects on recommendations from our article with the help of Anne Borcherds, Managing Editor for the journal Terra Nova, and Liz Caloi, Associate Managing Editor for the Royal Meteorological Society journals at Wiley.

Our goal for the whole “Better Peer Review” project is to enable journal teams to identify areas where their practice is great and areas where they may want to make improvements. We asked Anne and Liz the questions below about how to make peer review more useful, and they shared their insights.

Q. How do you think peer review can be made more useful?

Anne: Better peer review means treating authors respectfully. They’ve poured so much effort and time into the work that underpins their article. They have a massive amount invested in their paper, professionally, probably emotionally, and certainly in terms of the time they’ve spent on it. We need to let them know that we do care about their article just as much as they do. For example, if we rely too much on automated procedures and template emails, then authors might wonder whether anyone at the journal has even read their paper.

Liz: The “What does better peer review look like?” article already highlights what I think is essential for peer review to be useful: clarity in guidelines, constructive feedback that helps authors improve their research and writing style, and of course, showing reviewers appreciation for their valuable work. On this particular point, I think there are many things journals could do to better recognize their best reviewers. For example, they might invite them to share their experience with the journal’s board of editors and the journal’s community of authors, perhaps via blog posts or webinars. I’m also a keen supporter of constructive peer reviewer comments, and I believe these could be improved by better reviewer scoresheets. For example, instead of offering only tick boxes and a single free-text comment box, the scoresheet could ask reviewers to answer specific questions in writing, each with a minimum word count. Last, I would encourage editorial offices and editors to step back, conduct a review of reviews, and act on what they learn.

Q. What are your top priorities for improving usefulness?

Anne: For me, it’s editor engagement. In the traditional peer-review model, the editors are the decision makers and also (usually) the people who communicate that decision to the authors. Ideally, that decision is accompanied by clear comments about the review process in the decision letter. I submitted a case study (published in the preprint) about disparate reviews because I have seen situations where one reviewer recommends rejection and another recommends minor revision, after which a request for major revision is sent to the authors without any context for that decision in the decision letter. In this situation it’s helpful for authors who are asked to revise their paper to be given very clear guidance about exactly what they need to do. This is the editor’s responsibility. We need to understand that it is OK (even desirable) for editors to comment on reviews, to tell authors to disregard comments that are unfair or too picky (for example), or to add comments. We should also be looking at how well the article fits the journal, and comment on how to improve the article’s appeal to the journal’s readership (most reviewers won’t comment on that).

Liz: Yes, and we should gather authors’ feedback on the current system, journal by journal and not globally (because every journal has different needs). And then we should update the review system based on the authors’ assessment. I’d also suggest that journals should focus on revising scoresheets to make them more constructive and user-friendly (where by “user” I mean both “reviewer” and “author”). Journals should provide reviewers with reviewing guidelines via invitation emails and should perhaps conduct a yearly audit of reviewer activity and performance. To make this work, journals should adopt a better reviewer scoring system, and should review reviewer reports before sending them to authors.

Q. How do you think our article can help journals to deliver better peer review?

Anne: I think the breadth of the article is a real asset, especially in terms of improving editor engagement. Most editors don’t think beyond their own journals, and therefore they model their actions and procedures on what that journal has done before, or on their experience of submitting to/reviewing for other journals in their own field. It can be really helpful to get a broader view of how journals operate in other disciplines. There are aspects of peer review that don’t cross subject boundaries, but there are far more that do.

Thank you, Anne and Liz.

Journal team members who are interested in making their peer review processes better can read our article, published under a Creative Commons license by Learned Publishing. We have also created a Better Peer Review Self-Assessment tool derived from this work.

With our Better Peer Review Self-Assessment, you can record your reflections on your current practices and plan new directions. You will receive an immediate summary of your results, and we’ll then follow up with your total score by quartile, your Better Peer Review Badge, and a data visualization. And there’s more: our Better Peer Review Self-Assessment works whether you’re based inside Wiley or outside Wiley at one of the many societies and associations for which we’re proud to publish. Ask your Wiley publisher for more information! We hope you find the whole Better Peer Review experience useful, and that you’re able to identify areas for new directions and improvements to peer review at your journal.
