Researchers have responded to the challenge of the coronavirus with a commitment to speed and cooperation, featuring the rapid sharing of preliminary findings through “preprints,” scientific manuscripts that have not yet undergone formal peer review.
We are thrilled that researchers have embraced preprints, which are making new ideas, data and discoveries about the pandemic available to scientists and the public in almost real time. An example is the work of Bhramar Mukherjee and her team at the University of Michigan, whose research modeling the Covid-19 outbreak in India helped guide that government’s lockdown policies.
But the open dissemination of early versions of papers has created a challenge: how to ensure that policymakers and the public do not act too hastily on early studies that are soon shown to have serious errors.
A case in point occurred in April when a group of scientists from Stanford University posted the results of a study of previous exposure to the coronavirus in 3,300 residents of Santa Clara, Calif. Their paper, published on the preprint server medRxiv, concluded that “the number of infections is 50- to 85-fold larger than the number of cases detected,” suggesting that the fatality rates due to the coronavirus were much lower than previously thought.
Given the policy implications of such a result, it is no surprise that the study received immediate attention on social media, and in the local and national press. But the methods and results of the study were quickly questioned by scientists, who used Twitter, blogs and online comments on medRxiv to air concerns about the study’s design, the reliability of the antibody tests and the statistical methodology, including important errors in the basic mathematical formulas.
This coupling of rapid dissemination with an informal, crowdsourced form of peer review reflects a new and potentially transformative way to do science. But the speed of modern journalism, and the press's and public's lack of familiarity with preprints, meant that despite being largely debunked, the results were widely taken at face value.
It has always been a challenge for science journalists to balance the results of individual studies against the complex and often contentious process by which science converges on a better understanding of reality. In the past, because they were generally reporting on studies that had been through peer review at a scholarly journal, journalists could be confident that the work they were describing had received at least some scrutiny from independent scientists, even if that did not guarantee its accuracy.
But the slow and staid system of journal peer review in its current form offers little help to journalists in the rapid-fire world of preprints, especially amid a pandemic when there are strong forces aligned against patience. The best initial reporting on the Stanford study incorporated the concerns raised by scientists on Twitter. But we realize that we cannot rely on this as the sole means of guarding against the overly hasty application of science reported in preprints.
That is why we and a group of over 100 scientists are calling for American scientists and journalists to join forces to create a rapid-review service for preprints of broad public interest. It would corral a diverse contingent of scientists ready to comment on new preprints and to be responsive to reporters on deadline. This would provide journalists with reliable access to independent scientists to help them deal with today's growing stream of preprints.
Such a service would collaborate with journals and other respected organizations working at the interface between science and journalism. For example, SciLine — a philanthropically supported free service for journalists based at the American Association for the Advancement of Science — mediates hundreds of media interviews with scientists every year, and similar organizations exist in Britain, Germany, Australia and New Zealand. The service would also collaborate with professional organizations, such as the American Statistical Association, to recruit a team of volunteers with the expertise needed to assess new preprints on journalists’ timelines.
We hope that scientists will step up to this need and provide journalists with the tools they need to better understand the research and convey its practical message. The public and policymakers must demand this kind of scrutiny before they turn the latest science on Covid-19 or anything else into policy or individual action.
Michael Eisen is a computational biologist at the University of California, Berkeley. Robert Tibshirani is a statistician at Stanford University.