Let’s take an extreme example – and, in fact, “Extreme Open Science” is a term that is starting to gain recognition, as witnessed by the Open Science Forum. Imagine that you have an electronic laboratory notebook that not only records results but also contains experimental plans and procedures: in essence, research that you intend to do.
Now imagine that this notebook is networked for others to see in its entirety! The “others” in this example are members of a consortium of researchers who have agreed to share planned and ongoing experiments, and the results thereof, in real time.
OK, this isn’t as “open” as it possibly could be, but still, it’s an extreme example of openness compared with most researchers’ experience to date.
Now imagine the benefits of such a system. I don’t want to be patronizing—I’m sure you’ve already thought of many advantages. I’ll focus on one for the time being, and that’s the “avoidance of unnecessary work”.
Why We Need Negative Results – Fast!
Negative results are a big problem: not because they’re produced, but because they’re concealed. That concealment may well not be intentional, but let’s face it: the publishing landscape hardly abounds with venues for such material, and even if it did, potential funders or employers wouldn’t be scanning your publication list for such papers.
But arguably, it’s more important to publish a negative result as quickly as possible than a positive one, and here’s why: researchers should be able to share, easily and quickly, information that may benefit others. The ideal time to do so is at the point of production of the “result” (I know that we are usually talking about much more complicated entities, but I’ll use “result” for convenience); and, of course, the benefit of informing others as to what not to try, or how not to try it, is greatest if it’s immediate. Though publishing that “paper” might be practically impossible, making the information “public”, even within a group of related researchers, is a major achievement, and it could be done with little effort via something like an open research electronic lab notebook.
The Potential of Networked Results Reporting
You might ask, “But will this solve the problem of negative results? After all, others still need to make time to read them.” Well, reading a condensed presentation of a protocol and its (negative) result – i.e. something that doesn’t need to be a whole paper with a formal beginning, middle and end – could be quicker than finding and reading an ordinary article. But the potential of the “system” (i.e. a consortium sharing its research in real time, from conception to results) to do something even more innovative is immense. For example, via artificial intelligence and fingerprinting technology, an experiment being planned could trigger the system’s software to compare it with all known instances of the concepts involved, and give the researcher in question immediate feedback, e.g. “Already done and failed”, “Done in a similar variant…”, or “Not yet attempted in this group”.
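To make the feedback idea concrete, here is a minimal sketch of how such a comparison might work. It is purely illustrative: real fingerprinting would use much richer representations than keyword sets, and all names, records and the similarity threshold below are hypothetical assumptions, not a description of any existing system.

```python
def jaccard(a, b):
    """Similarity between two keyword sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def feedback(planned, records, threshold=0.5):
    """Return an advisory string for a planned experiment.

    planned: set of keywords describing the plan (a stand-in "fingerprint")
    records: list of (keywords, outcome) pairs, outcome in {"success", "failure"}
    """
    best = max(records, key=lambda r: jaccard(planned, r[0]), default=None)
    if best is None or jaccard(planned, best[0]) < threshold:
        return "Not yet attempted in this group"
    keywords, outcome = best
    if keywords == planned:
        return "Already done and " + ("failed" if outcome == "failure" else "succeeded")
    return "Done in a similar variant (outcome: %s)" % outcome

# Hypothetical consortium records.
records = [
    ({"knockout", "geneX", "mouse"}, "failure"),
    ({"overexpression", "geneY", "zebrafish"}, "success"),
]
print(feedback({"knockout", "geneX", "mouse"}, records))   # exact match
print(feedback({"knockout", "geneX", "rat"}, records))     # similar variant
print(feedback({"crystallography", "proteinZ"}, records))  # nothing comparable
```

The point of the sketch is only that the matching step is mechanically simple; the hard problems are building trustworthy fingerprints of experimental plans and populating the shared record in the first place.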
Furthermore, other researchers in the consortium would be able to comment on the emerging plan, inform it from their perspectives and hence likely increase its chances of success, or advise against it if it’s unlikely to work. The list of further things that the system could do, or do better than current conventions, is enormous.
The Risks That Come With The Rewards
Even in this small example, there are many points at which the system’s working depends implicitly on trust between consortium members. Trust that others will not take advantage of your result to “beat you” to a publication; trust that they will not “misuse”, “misrepresent” or “misinterpret” your findings in their own work before yours is published; trust that they will give you timely and honest feedback on an experiment in planning, or on the results thereof; trust that the information stays within the group until formal publication.
This is a good example, because – at a high level – it presents principle and practice in a way that enables us to see the great benefits of open research, but also important caveats. The latter refer to human nature and the need to be “wary” whilst seeking to improve things for everyone via collaboration and sharing. This is, of course, one of the classic dilemmas of humankind, and the positive angle is “nothing risked, nothing gained”.
In the case of open research, this point is arguably brought into unusually sharp, and sometimes poignant, focus. The “others” for whom your research may be useful could be anyone, and yet you have a very personal stake in work that is very likely unique. This enormous disparity between the scale of origination and the scale of awareness and use of research findings introduces great tension into the system. There are several ways in which risks could theoretically be minimized, regardless of whether they’d be practical: e.g. quantitatively or temporally staged releases of information; or entity-based staging of releases – first to the inner field, then to the wider community, then to all researchers – at each stage applying controls informed by feedback solicited from the previous group.
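The entity-based staging just mentioned can be sketched in a few lines. Again, this is a hypothetical illustration, not an existing mechanism: the stage names and the simple majority-approval rule are assumptions chosen only to show the gating logic.

```python
# Findings advance to successively wider audiences, each step gated by
# feedback from the previous, narrower group.
STAGES = ["inner field", "wider community", "all researchers"]

def next_release_stage(current_stage, feedback_votes, approval=0.5):
    """Decide whether a finding advances to the next audience.

    current_stage: index into STAGES
    feedback_votes: list of 0/1 votes from the current audience
    Returns the next stage index, or current_stage if held back (or final).
    """
    if current_stage >= len(STAGES) - 1:
        return current_stage  # already fully released
    if feedback_votes and sum(feedback_votes) / len(feedback_votes) >= approval:
        return current_stage + 1
    return current_stage

stage = 0
stage = next_release_stage(stage, [1, 1, 0])  # inner field approves: advance
stage = next_release_stage(stage, [0, 0, 1])  # wider community objects: hold
print(STAGES[stage])
```

Whether any such control would survive contact with real research practice is exactly the open question of this piece; the sketch only shows that staging itself is easy to specify.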
Or should, on the other hand, risks simply be taken at large scale with the confidence that the trade-off is worth the benefit?
What Do You Think?
We are interested in your opinions and suggestions on this fundamental aspect of research culture that is set to become ever more common. Please get in touch.
About the Author
Andrew Moore