Last week I had the pleasure of attending a workshop in Washington, DC, entitled “Enhancing Scientific Reproducibility through Transparent Reporting,” organized by the National Academies of Sciences, Engineering, and Medicine “to discuss the current state of transparency in reporting pre-clinical biomedical research.” I intend to provide a more detailed meeting report in the upcoming Fall issue of Science Editor (posting later this month), but I wanted to cover a few highlights here.
This workshop was part of a larger NASEM committee project exploring the role of “Reproducibility and Replicability in Science,” which just released a comprehensive consensus study report that is available for free online. The report is an excellent read that begins with an epistemological look at science itself, moves into thought-provoking definitions of reproducibility and replicability, and finishes by covering ongoing efforts and recommendations to improve reproducibility, replicability, and overall confidence in science. Committee Chair Harvey Fineberg summed up the report by stating that while there is no reproducibility crisis, there should be no complacency either.
If you aren’t able to read the full report, I highly recommend their brief one-pager: 10 Things to Know About Reproducibility and Replicability. The list includes the report’s definitions of reproducibility and replicability, terms that tend to be used interchangeably but can have useful distinctions if separated.
The report defines reproducibility narrowly, in a way that is sometimes referred to as computational reproducibility: being able to take the same data, code, methods, etc. and produce the same results. Replicability, then, is being able to generate consistent results across studies that use different data but try to answer the same question; for example, a drug that shows effectiveness in a trial with one population should prove similarly effective in a trial with a comparable population.
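To make the computational side of that definition concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not from the report): an analysis script that fixes its random seed so that anyone rerunning it with the same data and code gets exactly the same numbers.

    # Hypothetical illustration of computational reproducibility:
    # the same data plus the same code (with a fixed random seed)
    # should produce identical results on every run, on any machine.
    import random
    import statistics

    def bootstrap_mean_ci(data, n_resamples=1000, seed=42):
        """Return the sample mean and a 95% bootstrap confidence interval."""
        rng = random.Random(seed)  # fixed seed makes the resampling deterministic
        means = sorted(
            statistics.mean(rng.choices(data, k=len(data)))
            for _ in range(n_resamples)
        )
        return statistics.mean(data), (means[int(0.025 * n_resamples)],
                                       means[int(0.975 * n_resamples)])

    if __name__ == "__main__":
        data = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.8]  # stands in for a shared data file
        print(bootstrap_mean_ci(data))  # identical output every time it is rerun

If the shared data and code let a reader regenerate exactly this output, the result is reproducible in the report’s narrow sense; replication, by contrast, would mean collecting new data and asking whether the estimate still holds.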
While these definitions aren’t universally accepted, there is real value in thinking about the differences, especially in terms of failure. Failure to reproduce is almost always a bad sign: at worst, it can indicate a problem with the data, code, or some other element that casts serious doubt on the validity of the original results; at best, it reflects a lack of transparency in the design or methods. That is why, to improve reproducibility, some journals are requiring that authors make their data and code publicly available for anyone to scrutinize. As Sarah Brooks, an editor at the American Journal of Political Science (AJPS), reported during the Science Editor Symposium at the recent CSE Annual Meeting, a few journals, like AJPS, are taking the extra step of having all research computationally reproduced by an independent service before acceptance and publication. The recently posted Meeting Report on the symposium has more on this initiative and others.
Failure to replicate, however, is more nuanced. Because you are trying to see whether you will get similar results with a new data set or sample, a failure to replicate can reveal hitherto unnoticed distinctions between the two sets of data, leading to additional scientific insights. For example, it was in part concern over problems replicating key preclinical animal research that led the National Institutes of Health, relatively recently, to start requiring that sex be considered as a biological variable. As with reproducibility, greater transparency will also be key to greater replicability: the more that is known about a study’s design and methods, the more likely another researcher will be able to accurately replicate them.
Journals have an important role in helping to enforce appropriate and consistent transparency upon publication, but as I will discuss further in the upcoming meeting report, it will take all of the stakeholders of the scientific community, including institutions, funders, researchers, and publishers, working together to make a difference.
– Jonathan Schultz
Editor-in-Chief, Science Editor
Resource of the Month
Being an editor and working at a scientific publication requires keeping up with a rapidly changing scientific and publishing landscape, so each month we highlight a resource that will hopefully make this at least a little bit easier.
CSE Short Courses are coming to DC with the Fall 2019 CSE Short Course on the Road: Publication Management on November 13, 2019. This short course is being taught by a great group of experienced speakers and is really one of the best ways to learn about or stay up-to-date on “the wide-ranging role of managing editors and publication managers as well as the daily challenges they face.”
Recent Early Online Article
Continuing the topic of reproducibility, I strongly encourage you to check out Three Approaches to Support Reproducible Research. Sowmya Swaminathan (who was instrumental in the development of the MDAR framework) and co-authors outline three approaches: implementing a checklist for transparent reporting in life science articles, supporting computational reproducibility through peer review of code, and publishing registered reports, an “innovative article format aiming to reduce publication bias.” If you or your organization is interested in any of these approaches, this article is a very helpful resource.
Hot Articles from Recent Issues (For CSE Members only)
As a CSE member benefit, once Science Editor articles are moved to an issue, they are available only to CSE Members for one year.
In general, peer reviewers are supposed to focus on the science and leave grammar and writing mistakes for copyeditors to fix down the line. However, it makes intuitive sense that the quality of the writing may still sway reviewers a bit; for example, a poorly written manuscript can be hard for a reviewer to get through. In her article, Understanding the Importance of Copyediting in Peer-Reviewed Manuscripts, Resa Roth explored this idea and “sought to determine whether the frequency of positive, neutral/unknown, or negative copyediting terminology was correlated with submission outcome (reject and different types of accept).”
Not a CSE member? Additional membership info along with instructions for becoming a member of the Council of Science Editors can be found here.
From the Archives
This month the National Institutes of Health will start requiring ORCID iDs for a selection of their grant recipients, and presumably, this requirement will soon extend to all NIH-supported researchers. In the biomedical sciences, at least, an NIH requirement is usually the tipping point for mass adoption. So it’s interesting to look back at this interview from 2015 with Laurel Haak, Executive Director of ORCID, to see how far the organization has come in a relatively short period of time: ORCID: In Full Bloom
Science Editor Newsletter: Year One
We sent the first Science Editor Newsletter last October, and I really appreciate the readers who have reached out to me with feedback, answered questions, or let me know they enjoy this monthly update. It’s also great to see that the click-through rates indicate a large percentage of you are going on to read the articles I highlight each month, and I hope you have found them to be helpful and informative.
My goal has always been to make this Newsletter an avenue for feedback and collaboration, so I strongly encourage anyone who has suggestions for articles, topics, or interesting news you think we should cover to send an email to scienceeditor@councilscienceeditors.org. As I noted in the first Newsletter, if you know anyone who is not a CSE member but may enjoy this, I encourage you to forward it to them. At the bottom of each Newsletter is a link to sign up to start receiving it, and you never know, maybe they’ll be inspired to become a CSE member (if the person I’m describing is you, here’s a link to join).
Now onto another year! But first, maybe it’s time for a vacation…
Feedback and suggestions are always welcome at scienceeditor@councilscienceeditors.org.
We are also always looking for new submissions or article suggestions you may have; for more details, see our Information for Authors.