Some Recent Reading
Articles on science editing and related topics appear in a wide variety of periodicals. Presented here are brief summaries of some articles of science-editorial interest that emerged in a recent search.
Shotton D. Semantic publishing: the coming revolution in scientific journal publishing. Learned Publishing. 2009;22(2):85–94.
“We are in the opening phases of a scientific publication revolution,” proclaims Shotton, head of the Image Bioinformatics Research Group in the Department of Zoology of the University of Oxford. He is referring to the use of the Internet to engage in “semantic publishing”, which he defines as “anything that enhances the meaning of a published journal article, facilitates its automated discovery, enables its linking to semantically related articles, provides access to data within the article in actionable form, or facilitates integration of data between papers”. In essence, he asks, how can editors use the Internet to serve the research and information needs of their readers better and thus enhance the popularity and desirability of their journals? Shotton offers some ideas, such as better tagging of terms in articles and linking the references (via hyperlinks) to the corresponding papers. Shotton also presents ideas for making information more freely available and easily usable; for example, instead of posting results online only as a table, it might be helpful to provide them as an Excel spreadsheet for others to download. He also emphasizes the importance of standardizing the format of information among publications, but he says, “If no preexisting standards exist, it will pay just to do whatever seems most sensible and wait for standards to emerge.”
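To make the “actionable data” idea concrete, here is a minimal sketch of publishing the numbers behind a results table in a machine-readable, downloadable form; Shotton mentions Excel spreadsheets, but a plain CSV file (used here for simplicity) serves the same purpose, and the file name and column headings are hypothetical, not taken from his article:

    # A minimal sketch of publishing the data behind a results table in a
    # machine-readable, downloadable form. The file name and fields are
    # hypothetical, not taken from Shotton's article.
    import csv

    results = [
        {"sample": "A", "measurement": 4.2},
        {"sample": "B", "measurement": 5.1},
    ]

    with open("table1_data.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["sample", "measurement"])
        writer.writeheader()
        writer.writerows(results)

Readers, or their software, can then load and reuse such a file directly instead of retyping numbers from a typeset table.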
Aarssen LW, Lortie CJ, Budden AE, Koricheva J, Leimu R, Tregenza T. Does publication in top-tier journals affect reviewer behavior? PLoS ONE. 2009;4(7):e6283.
The authors, who are part of a working group on publication bias at the National Center for Ecological Analysis and Synthesis, ask whether a relationship exists between scientists’ having published articles in top journals (as measured by impact factor) and the rejection rates that they recommend when acting as peer reviewers. The authors asked ecologists to indicate which journals, in a provided list of high–impact-factor journals, they had published in and to estimate the percentage of papers that they suggest for rejection when acting as peer reviewers. There was a correlation (with a correlation coefficient of 0.867 and a P value of less than 0.0001) between the number of high-impact journals published in and the recommended rejection rate of journal articles reviewed. The authors speculate that as one reads and publishes in higher–impact-factor journals, those journals become the standards for judging other papers, whether the papers are intended for top journals or not. Thus, for the sake of fairness, the authors indicate, it might be worthwhile to send a paper to reviewers who have a variety of publication histories.
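For readers unfamiliar with the statistic reported, a Pearson correlation of this kind takes only a few lines to compute; the sketch below uses hypothetical numbers, not the study’s data:

    # A minimal sketch of computing a correlation coefficient and P value
    # like those reported in the study. The data below are hypothetical.
    from scipy.stats import pearsonr

    # Hypothetical values: the number of high-impact journals each reviewer
    # has published in, and the percentage of reviewed papers each reviewer
    # recommends rejecting.
    journals_published_in = [1, 2, 3, 5, 8, 10]
    recommended_rejection_pct = [20, 25, 35, 40, 55, 60]

    r, p = pearsonr(journals_published_in, recommended_rejection_pct)
    print(f"correlation coefficient = {r:.3f}, P value = {p:.4f}")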
Drazen JM, van der Weyden MB, Sahni P, Rosenberg J, Marusic A, Laine C, Kotzin S, Horton R, Hébert PC, Haug C, Godlee F, Frizelle FA, de Leeuw PW, DeAngelis CD. Uniform format for disclosure of competing interests in ICMJE journals. N Engl J Med. 2009;361(19):1896–1897.
In this editorial, published in all International Committee of Medical Journal Editors (ICMJE) journals and posted online at www.icmje.org/coi_disclosure.pdf, the authors introduce a new form for disclosure of potential conflicts of interest. The form has been adopted by all member journals of the ICMJE, but editors will have discretion as to when in the publication process to request the disclosure form from authors. The editorial states that the standardized form is designed to make disclosure “easier for authors and less confusing for readers”. The form includes four parts, in which authors are asked to note the sources of their funding for the particular research in question, their associations in the preceding 3 years with any organization that has a vested interest in the topic of the manuscript, “any similar financial associations” of their immediate family members, and any other associations or relationships “that may be relevant to the submitted manuscript”.
Petrovečki M. The role of statistical reviewer in biomedical scientific journal. Biochemia Medica. 2009;19(3):223–230.
“In general, biomedical journal editors do not have enough knowledge, training, or skills to evaluate statistical methods and computational analyses in all manuscripts submitted for consideration,” especially techniques that are new or difficult, according to this editorial. That lack of knowledge, especially if shared by the authors of the paper and the peer reviewers, can lead to statistical errors in published manuscripts, the editorial states.
Therefore, the author endorses the use of a statistical reviewer (sometimes known as a statistical editor) to evaluate the design of a study and the statistics used to analyze the data, looking for errors or omissions. Certainly, some journals already use this practice—the article mentions the Croatian Medical Journal and The Lancet by name. Just as in the standard peer-review process, the statistical reviewer can recommend accepting the paper as is, recommend rejecting it outright if the flaws are unfixable, or suggest modifications to be made before acceptance for publication.
Woloshin S, Schwartz LM, Casella SL, Kennedy AT, Larson RJ. Press releases by academic medical centers: not so academic? Ann Intern Med. 2009;150(9):613–618.
The authors used EurekAlert—a database of science-related press releases—to select, at random, 10 press releases each from the 10 highest-ranked and 10 lowest-ranked academic medical centers, as ranked by U.S. News & World Report. To be included in the study, a press release had to describe research, not events held or awards won. The authors also interviewed a public-relations officer at each of the 20 institutions to get an idea of the process of creating and distributing press releases. The researchers found what they considered to be problems in the press releases; among the most common were exaggeration of a study’s importance or validity and missing information (such as study size, sources of funding, and appropriate caveats).
Flisher AJ. Does the impact factor have too much impact? S Afr Med J. 2009;99(4):226–228.
This editorial begins with a very brief overview of the impact factor, including its history and how it is calculated. The author then lists some of what he sees as flaws in the widespread use of the impact factor. He says that there are flaws in the concept itself: It assumes that a paper will be cited because of its high quality, but a paper may be cited because it is of extremely low quality and later researchers are criticizing it or refuting its results. In addition, a paper that presents a technique might be commonly cited, and “such a paper will often be of high quality,” the editorial says, “but this is not necessarily so.” Review articles, too, by their nature are cited often, as are papers that are cheap or easy to access. The editorial also points out problems in using a 2-year time frame to count citations for the impact factor, in that some papers might be recognized as being of value only later. That might be an issue, the editorial says, especially for interdisciplinary papers or journals, inasmuch as some disciplines traditionally move more quickly than others.
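For reference, the standard 2-year impact factor to which the editorial refers is calculated, for a journal in year Y, as

    \mathrm{IF}_Y = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}

A paper that begins to be cited only after that 2-year window closes therefore contributes nothing to the impact factor, which is the timing problem the editorial raises for slower-moving and interdisciplinary fields.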
The author includes a table of ways in which an editor can manipulate the impact factor of a journal, although he explicitly says that he is not suggesting that editors adopt these practices. They include favoring the publication of particular types of papers, such as review articles and papers introducing new techniques or scales, and discouraging others, such as clinical papers and papers in fields that are not quickly expanding; ensuring that the papers most likely to be cited appear at the beginning of the year; not publishing supplements; and doing as much internal citing as possible. Internal citing can be increased by publishing more letters and editorials that refer to articles in the journal and by maintaining a “continuity of themes” between issues, which increases the number of papers from earlier issues that are cited. Ultimately, the author does not voice an opinion about the continued use of the impact factor except to say that the impact factor of a journal that has published a scientist’s papers should be just one factor in evaluating his or her work.
Christina Sumners, a graduate student in science and technology journalism at Texas A&M University, prepared this column while a Science Editor intern.