How Do New Peer Review and Publishing Models Impact Research Quality?

March 27, 2019 Chris Graf

Research publishing is changing, and quickly. New models of peer review are emerging and now coexist, with new business models to support them (like the Projekt DEAL and Wiley announcement reported here). This means that complex choices characterize even the most traditional and conservative parts of research publishing, like peer review. So how can researchers and publishers, in collaboration, maintain essential aspects of quality, namely integrity and ethics? And how can publishers deliver better peer review and find new sources of value for researchers?

Two Essential Aspects of Quality: Integrity and Ethics

Peer review, as part of the peer-reviewed publishing process, is how we manage two essential aspects of quality in research and research publishing: integrity and ethics. By integrity, we mean the reliability, reproducibility, trustworthiness, and usefulness of published research. By ethics, we mean the regulated ethical requirements for doing research (human and animal research in particular), as well as equally important community-led obligations (like authorship practices), and how these are reflected and reported in published research.

How Are We Doing With That?

Often, people look to retractions as a marker for quality in research integrity and publishing ethics. Retractions, for the uninitiated, are formal withdrawals of research articles, published when something is significantly wrong with the integrity or the ethics of a piece of research. Retractions can be for honest errors, for research misconduct, or for something in between. Jeffrey Brainard published an analysis of records from the world’s largest retraction database, titled “Rethinking Retractions,” in Science. While Brainard reports that the number of retractions grew ten-fold between 2000 and 2014, he also reminds us that the total number is actually low (maybe 4 in every 10,000 articles published) and that the number of articles published is also growing (doubling over a similar period). About 40% of the retractions Brainard studied involved honest errors, problems with reproducibility, and other issues. The remainder were for the “something in between” of questionable research practices, or for misconduct. Brainard quotes Nick Steneck (University of Michigan in Ann Arbor) saying, “Retractions have increased because editorial practices are improving, and journals are trying to encourage editors to take retractions seriously.”

[Image: Chris Graf discusses Jeff Brainard’s study of retractions, “Rethinking Retractions,” in Science (http://science.sciencemag.org/content/362/6413/390). Image via Twitter, @MatthewAHayes1 (https://twitter.com/MatthewAHayes1/status/1085538549141815298).]

We Have Evolving and Completely New Peer-Reviewed Publishing Models

Retractions are, as Brainard, Steneck, and many of us would argue, a sign of “quality” in the research publishing process. They’re published when research publishers, using their traditional peer-reviewed publishing processes, curate (per their promise to the world) the research they publish to ensure it is as reliable as it can be. Retractions give us a sign that publishers are working with researchers when problems arise, either with integrity or with ethics (or both), to address those problems in a robust and increasingly transparent way.

But retractions are governed by our traditional peer-reviewed publishing process, which is quickly evolving, with completely new peer review models emerging.

Author-Mediated Peer Review

“Author-mediated peer review” is one quite profound evolution, akin to discussions for many years about post-publication peer review. Wellcome Open Research is a research publishing platform maintained by the Wellcome Trust. Authors submit their work to the platform, and after rapid quality checks and screening (including for our essential integrity and ethics qualities), their research is published immediately. After publication, the author is incentivized to get their work peer reviewed (for example, only work that is peer reviewed is then indexed in PubMed Central and Europe PubMed Central). They then pick and invite the reviewers. If the author fails to get their work peer reviewed with a positive outcome, then it likely sits on the platform (and probably doesn’t ever get read). For reviewed papers, the authors choose whether or not to address any points raised by the peer reviewers. Authors are totally in charge. This is a profound evolution: publication, and then possible peer review. By doing this, Wellcome Open Research (and others adopting this approach, like F1000 Research) have re-imagined the traditional processes we’re used to relying on to govern quality.

Community-Mediated Peer Review

“Community-mediated peer review” takes things one step further into new territory. Right now, researchers can post their manuscripts to “preprint servers” where they can (almost immediately) create a permanent published, but not peer reviewed, record of their work. They might then choose to submit their work to a traditional journal for peer-reviewed publication. arXiv is the world’s most established preprint server, and has been publishing actively for many years in physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. Many other preprint servers are emerging, often designed to serve particular research disciplines or sub-disciplines. A good example is bioRxiv, the preprint server for biology. In general, preprint servers are seeing “hockey stick-like” growth in their use (albeit in relatively small absolute numbers compared with the, say, 200,000 peer-reviewed journal articles that are published every month in traditional journals). Reputable preprint servers do check preprints before publishing them (bioRxiv says “all articles undergo a basic screening process for offensive and/or non-scientific content and for material that might pose a health or biosecurity risk and are checked for plagiarism”). They don’t do peer review. But they may enable the communities of researchers that use preprint servers to assemble, peer review, and offer comments on preprints, thereby ensuring quality – including integrity and ethics.

Publisher- and Editor-Mediated Peer Review

“Publisher- and editor-mediated” peer review, the traditional model that most journals use and that governs quality for most research articles, is not immune to changes. Look at the scale that some new journals are achieving (let’s take Nature Communications and PLOS One as examples of general journals publishing many thousands of articles per year; and Ecology and Evolution and Cancer Medicine as examples of specialist journals publishing many hundreds of articles per year). Each of these journals has achieved new kinds of scale measured by the numbers of articles they are peer reviewing and publishing. And each has updated its traditional editorial team and processes to handle that kind of scale. But each still uses a pretty traditional model for peer review and governs integrity and ethics in the ways we’re used to.

We Need to Ask Ourselves: Who Looks After Quality Now?

So, with evolving and completely new peer review models, we do need to ask ourselves: Who looks after ethics and integrity – the most essential aspects of quality – now? And are we happy with how they’re doing it?

We’ll find out. On January 16, 2019, in Berlin at the APE Conference, Dr. Elisa De Ranieri, Editor-in-Chief of Nature Communications, Dr. Diego Benedict Baptista, Open Research Coordinator at the Wellcome Trust, and Dr. John R. Inglis, Executive Director of Cold Spring Harbor Laboratory, Cold Spring Harbor (publishers of bioRxiv), told us how they do it, in a panel session moderated by Chris Graf, Director of Research Integrity and Publishing Ethics at the research publisher Wiley (and Co-Chair of COPE, the Committee on Publication Ethics). If you couldn’t be there, check APE2019 and #acadAPE on Twitter and look for the official recordings on YouTube.

About the Author

Chris Graf

Chris Graf is Director, Research Integrity and Publishing Ethics at Wiley, and Co-Chair of COPE, the Committee on Publication Ethics (an elected and voluntary position for which he will serve a two-year term). Chris leads initiatives at Wiley that focus on transparency, research integrity and publishing ethics.
