To celebrate Peer Review Week, representatives from ORCID, ScienceOpen, Sense About Science, and Wiley got together (virtually) to discuss why peer review matters - to them and to their organizations - and what they hope this week will achieve.
Participants: Stephanie Dawson (CEO, ScienceOpen), Chris Graf (Wiley), Peter Gregory (Wiley), Laure Haak (ORCID), Emily Jesper (Sense about Science)
Stephanie Dawson (SD): When I started out as a journal manager at the publisher De Gruyter, I used to receive three copies of each manuscript, which I put in envelopes and sent to Japan or Argentina by post, hoping that the chosen peer reviewer would have time to review. Much has changed since then, and much has not. Some reviewers did an extremely conscientious job of helping authors improve their papers; others sent a three-line rejection. I saw firsthand how heterogeneous the process could be, and I wished I could personally thank each reviewer who put in the effort to really help their colleagues.
With the research and publishing network ScienceOpen, one of our distinct roles, besides being a Gold Open Access (OA) publisher and content aggregator, is that of peer review reformer. Our goal is to build trust in the peer review process by making it entirely transparent. Then everyone can see (and even thank) those reviewers who are doing an exceptional job of improving the quality of scientific communication.
We’ve deployed a novel workflow to demonstrate the efficacy of a different approach to peer review and its suitability for the digital future of scholarly communication, which we believe will need to speed up and involve the publication of many more digital objects than the current single-article unit.
Chris Graf (CG): Speaking of novel workflows, I’m personally interested in how pre- and post-publication peer review in combination will make a better system.
Pre-publication peer review validates and organizes research, and in the main it does a great job. But – let’s be honest – sometimes it goes spectacularly wrong. And that’s where post-publication peer review comes in.
Post-publication peer review can help contextualize and curate research, and – excitingly for me – perhaps also give a more accessible entry point for readers (e.g., comments in PubPeer and PubMed Commons, and metrics and more in Kudos, provide a steer for readers and help surface articles). And – yes – post-publication peer review weeds out the occasional bad apple. Which is important.
Last, we need to stop and think – and have a public discussion – about whether we should view a peer reviewed article as absolute, definitive, final, and 100% correct just because it’s peer reviewed. Might it be more realistic to think of a peer reviewed article as one step on a (fairly tortuous) research journey, on which we can expect to make missteps, take wrong turns, stumble, and change direction before we reach our final destination: The Answer? Many steps make up that journey, and many pieces of research make up the evidence we use to inform practice and policy.
Laure Haak (LH): I completely agree with your last point but, in terms of validation, is that really what peer review does? Is it supposed to be the filter that says this paper used the right methodology and statistics (in the case of a science paper), or are we really looking to the peer reviewer to assess the logical flow of the paper? At the end of the day, whose responsibility is it if the paper is found to be "fraudulent"? Can we really expect – or do we want – that weight to be on the reviewer?
Emily Jesper (EJ): And to Chris’s point about peer review’s role in informing practice and policy, of course peer review is not something of significance only to scientific researchers. Because it indicates that research has been scrutinized by independent experts in the field, peer review is an important consideration for policy makers, reporters, and the public when weighing up research claims and debates about science. Understanding the status of research claims is empowering: it helps us weigh up claims and use evidence to make decisions. Since Sense About Science was set up in 2002, we have been working to popularize an understanding of peer review. Our public guide to peer review, I Don’t Know What to Believe, encourages people to ask “Is it peer reviewed?” when weighing up claims about science.
Peter Gregory (PSG): Although you’re right that peer review does have a wider significance, it matters to Wiley primarily because it matters to the scholarly community we serve. In addition to gathering opinion and improving contributions, peer review also provides, especially in the sciences and medicine, a safety valve, preventing the distribution of incorrect or even dangerously erroneous articles.
LH: Yes, peer review is a core component of scholarly practice. It is an established set of methods for gathering commentary on scholarly works (papers, books, grants, programs, and more) from peers -- experts in the field -- with the end goal of improving the work. ORCID is interested in peer review because people are involved, both the creators of works and their reviewers. We are working with the community to develop digital methods for acknowledging the contributions of reviewers by citing peer review activities. Through this we hope to encourage scholars to participate as reviewers, and also to support others who are working on issues of trust in the review process.
SD: I love the idea of making ORCID the place where researchers can log their reviewing activities. ScienceOpen is trying to address this with open reviews that have a registered CrossRef DOI. Publons is also doing a nice job of this, as well as supporting reviewers at blind peer review journals. In cases of blind review, they verify with publishers that a reviewer has indeed done the work they claim – right now it would be easy for a researcher to add “review for Nature and Cell” to their CV, because there is no way to verify this in our current blind review system. Again, trust and transparency are key.
PSG: I hope that this Peer Review Week will help intensify discussion of peer review and start sensitizing participants (publishers, authors, editors, referees, funders, etc.) to the differences in the ways peer review is applied and is useful in different communities. Some areas collect opinions, some are more fact-based; some benefit from preprint circulation (prior to peer review and publication, that is), some are damaged by it; for some, post-publication review is innovative, for others it is dangerous. This leads me to the conclusion that those working to improve or change peer review should bear carefully in mind that one size does not fit all.
SD: I agree that there will not be a “one size fits all” solution, but different communities can also learn from each other, especially as an increase in interdisciplinary research draws them together. One great example is the life sciences preprint server bioRxiv, which builds on the physics community’s positive preprint experience with arXiv. In the new field of bioinformatics, the computer scientists just put their preprints on arXiv as a matter of course, and the researchers from the life sciences asked, “What’s that?” So something really interesting may grow up at exactly those points of friction. The new trend toward megajournals such as PLoS One, SpringerOpen, and Scientific Reports also requires rethinking peer review to fit a wide range of communities, and all of those journals have converged on asking reviewers to assess only the scientific soundness of the results. This may give us some clues to what the common denominator of peer review will be moving forward.
LH: The idea of a common denominator is really key and is exactly the sort of discussion I hope Peer Review Week will generate – stimulating and concentrating the ongoing conversation on peer review and, in so doing, bringing forth what is being done in practice to support improvements.
To read more of this discussion, visit the ORCID blog.
Don't forget to join the conversation online using #peerrevwk15.
About the Author: Alice Meadows