“Data are power” was the theme running throughout the Short Course on Journal Metrics. Anecdotes from authors about long manuscript delays and editor complaints about lazy reviewers will persist, but the numbers tell the truth. This panel of self-declared “data geeks” reinforced that truth: from ferreting out poor-performing reviewers to analyzing your competition, data provide leverage and lead to informed decision making.
Glenn Landis opened this metric-centric discussion with a directive: know your journal. When submissions enter your system, measure key touch points from submission to assignment to production. These turnaround times tell the story of your staff and editors. Taken in aggregate, they can help you assess where manuscripts may be sitting too long, set target timeframes, and reconnect with staff and editors about expectations.
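To make this concrete, below is a minimal sketch of that kind of touch-point aggregation, assuming a hypothetical CSV export with columns for manuscript ID, submission date, editor assignment date, and first decision date (the file and column names are placeholders, not any particular submission system's format).

```python
# Sketch: aggregate turnaround times from a hypothetical submission-system export.
# Assumed columns: manuscript_id, submitted, assigned_to_editor, first_decision
import pandas as pd

ms = pd.read_csv("manuscripts.csv",
                 parse_dates=["submitted", "assigned_to_editor", "first_decision"])

# Days spent at each touch point
ms["days_to_assignment"] = (ms["assigned_to_editor"] - ms["submitted"]).dt.days
ms["days_to_first_decision"] = (ms["first_decision"] - ms["submitted"]).dt.days

# Aggregate view: where are manuscripts sitting too long?
print(ms[["days_to_assignment", "days_to_first_decision"]].describe())
```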
Every journal should audit its impact factor (IF) statistics as soon as they are published in order to catch inaccuracies. Extract all included articles from Web of Science by DOI, using the same timeframe as the published IF, and export the data into an Excel pivot table. List article types and counts to obtain the denominator; the numerator is often undercounted. If you find specific problems, you can report them to Thomson Reuters, but you must do so soon after the IF is released.
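As a rough illustration of the denominator check described above, here is a sketch that assumes a hypothetical Web of Science export with DOI, document type, and publication year columns (the file name, column names, and example years are assumptions):

```python
# Sketch: tally citable items by article type to check the IF denominator.
# Assumed columns in a hypothetical Web of Science export: DOI, document_type, pub_year.
import pandas as pd

wos = pd.read_csv("wos_export.csv")

# Restrict to the two publication years that feed the published IF
# (e.g., items from 2013-2014 for a 2015 release).
window = wos[wos["pub_year"].isin([2013, 2014])]

# Pivot-table-style count of article types; citable items (articles, reviews)
# form the denominator of the IF calculation.
counts = window.pivot_table(index="document_type", values="DOI", aggfunc="count")
print(counts)
print("Denominator (citable items):",
      counts.loc[counts.index.isin(["Article", "Review"]), "DOI"].sum())
```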
Continuing the story of metrics, Carissa Gilman, managing editor of Cancer, tackled the age-old question of how and why readers view published content. Old print circulation models no longer apply in the current landscape. Publishers are proactively tracking data that are ripe for the picking, and editors can also extract valuable statistics from open-source, proprietary, and hosted analytics programs such as Google Analytics, Webtrends Analytics, and Adobe Analytics.
It is also important for editors to conduct their own mini-analyses by navigating their sites regularly: assess the number of clicks it takes to find specific content; test the search function by author name, title, subject keywords, and DOI; and Google your own content to ensure it appears on the first page of results. With these data, you can make informed decisions about site changes, depending, of course, on available time and resources.
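One small slice of this self-audit can even be scripted. The sketch below simply checks that a handful of published DOIs resolve through the public doi.org resolver; the DOIs shown are hypothetical placeholders.

```python
# Sketch: spot-check that published DOIs resolve, one automatable piece of the site self-audit.
# The DOI list is a hypothetical placeholder; doi.org is the standard public resolver.
import requests

dois = ["10.xxxx/example.2015.001", "10.xxxx/example.2015.002"]  # hypothetical DOIs

for doi in dois:
    resp = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=30)
    status = "OK" if resp.ok else f"problem (HTTP {resp.status_code})"
    print(f"{doi}: {status} -> {resp.url}")
```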
Angela Cochran from the American Society of Civil Engineers provided a case study of successfully using data to change editor behavior. Across the portfolio of 35 journals, the average turnaround from submission to first decision was 7 months in 2005; by 2014, it had fallen to 3 months. Although the editors complained of poor reviewer turnaround times, the data indicated that manuscripts were sitting in editor queues rather than reviewer queues. The editors began to take responsibility and improve their practices. Editors now even request more granular data, and these data “report cards” are distributed to all members. Late reports are run and distributed when manuscripts pass a specific deadline for a given touch point.
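A late report of this kind might be generated along the lines of the following sketch; the column names and the 21-day deadline are assumptions for illustration, not ASCE's actual report settings.

```python
# Sketch: flag manuscripts that have passed a deadline at the editor touch point.
# Assumed columns: manuscript_id, assigned_to_editor, first_decision.
# The 21-day threshold is an illustrative assumption.
import pandas as pd

ms = pd.read_csv("manuscripts.csv", parse_dates=["assigned_to_editor", "first_decision"])

DEADLINE_DAYS = 21
open_with_editor = ms[ms["first_decision"].isna()]
days_waiting = (pd.Timestamp.today() - open_with_editor["assigned_to_editor"]).dt.days
late = open_with_editor[days_waiting > DEADLINE_DAYS]

# Candidates for a "late report" distributed to editors
print(late[["manuscript_id", "assigned_to_editor"]])
```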
Data-driven metrics can also spark new product development. Cochran noted that editors should routinely examine the types of papers published, topic areas, authors, and countries of origin. By analyzing trends, rejected material, and your competition, you may find that it makes sense to launch a spinoff journal, although she emphasized that the investment of time and resources warrants serious consideration. If your high-quality rejected papers are going elsewhere and being cited, that is a positive indicator for adding a spinoff to the existing portfolio.
Landis outlined which segments to examine in a competitive analysis. Using Web of Science, Journal Citation Reports, or Scopus, you can determine your own citation counts, which journals are citing your articles, where the top and most highly cited articles are being published, and who the top authors are, all of which enables comparison of performance across journals. Google Scholar can provide snapshot metrics to assess the visibility and influence of recent articles in scholarly publications.
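A first pass at this kind of competitive tally could look like the sketch below, which assumes a hypothetical citation export with citing-journal, citing-author, and cited-DOI columns (real Web of Science and Scopus exports use different field names):

```python
# Sketch: summarize a hypothetical citation export to see who cites you and where.
# Assumed columns: citing_journal, citing_author, cited_doi.
import pandas as pd

cites = pd.read_csv("citations_export.csv")

print("Top citing journals:")
print(cites["citing_journal"].value_counts().head(10))

print("Most frequently citing authors:")
print(cites["citing_author"].value_counts().head(10))

print("Most-cited articles (by DOI):")
print(cites["cited_doi"].value_counts().head(10))
```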
In addition to your competitors' classic impact metrics, including Eigenfactor and Article Influence Score, review media coverage and track where your society's meeting plenary reports are published. Survey your authors to gauge their reasons for submitting to your journal, including speed of review and first decision, open-access options, audience, and branding. Ask authors where else they have submitted and why they chose other journals.
Gilman then defined the IF and outlined some shortcomings of this metric. Universities still use this classic measurement to determine faculty tenure, librarians use it to decide which journals to purchase as their budgets shrink, and authors use it to decide where to submit. However, the drawbacks are well established: among other issues, the calculation does not correct for self-citations, and review articles can inflate the score. Journals can also employ unethical “gaming” practices such as self-citation, citation stacking, and citation cartels. A cartel is formed when a group of journals' authors cite articles in the group's journals, thereby increasing the IFs within the group. These inappropriate actions can result in exclusion from the Journal Citation Reports (JCR).
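For reference, the standard two-year IF is the ratio of citations received to recently published citable items:

```latex
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
                      {\text{citable items (articles and reviews) published in years } Y-1 \text{ and } Y-2}
```

Because self-citations count in the numerator like any other citation, the gaming practices described above inflate the score directly.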
Journal performance should not be judged on a single metric, and several adjunct or alternative measurements are available. JCR also provides the Immediacy Index, Cited Half-Life, 5-Year JIF, Eigenfactor Metrics, and Journal Self Cites. The strengths and weaknesses of other metrics, including PageRank, Eigenfactor, Article Influence Score, SCImago Journal Rank (SJR), Impact Per Publication (IPP), Source-Normalized Impact per Paper (SNIP), and the h-index, were also outlined. The m-index, g-index, e-index, h-index, c-index, and Google Scholar's i10-index are author-based metrics that can be used to gauge your influence in the field, as well as that of your competitors.
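Of these author-level measures, the h-index is the most widely used: it is the largest h such that the author has h papers with at least h citations each. A minimal sketch of the computation:

```python
# Sketch: compute an author's h-index from a list of citation counts.
# h-index = the largest h such that h papers have at least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts yield an h-index of 3.
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```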
Phill Jones, head of outreach at Digital Science, discussed the traditional metrics of IF and citations and how these measurements actually play out in the real world. Traditional metrics are increasingly lagging indicators: although they may still signal academic impact, funders are now seeking ways for researchers to show proof of social impact. Research results may drive policy change, lead to patents on devices that mitigate poverty, or support robust clinical trials that improve patient treatment. It is important for researchers to show this impact to funders, and alternative metrics can monitor these conversations in real time.
Furthering this concept, Sara Rouhi, product specialist at Altmetric, noted that these complementary measurements, or altmetrics, gauge immediate attention to research. This attention is measured from nontraditional sources such as social media, blogs, and policy documents, and it also reveals who is interacting with your content, e.g., practitioners, the general public, or academics. Individual authors can use these data to tell the story of an article's impact, which can help them secure grant funding, solidify their reputation in the field, and track their competitors. The company Altmetric provides products that are rapidly gaining traction as an industry standard. Publishers can display Altmetric badges and widgets that link to real-time conversations (clicks, tweets, posts, blogs) about a specific article and can provide these data as a free service to authors. Altmetrics can also support marketing, reveal who is competing for your authors, and track trends.
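As an illustration of how such article-level attention data might be pulled programmatically, here is a sketch against Altmetric's public details API; the endpoint and field names follow Altmetric's public documentation but should be treated as assumptions, the DOI shown is a hypothetical placeholder, and the free API is rate-limited.

```python
# Sketch: fetch attention data for one article from Altmetric's public details API.
# Endpoint and field names are assumptions based on Altmetric's public documentation.
import requests

doi = "10.xxxx/example.2015.001"  # hypothetical DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=30)

if resp.status_code == 200:
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("Tweets:", data.get("cited_by_tweeters_count"))
    print("News stories:", data.get("cited_by_msm_count"))
else:
    print("No Altmetric data found for this DOI (HTTP", resp.status_code, ")")
```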
Importantly, this course continually challenged the audience to ask: What are you trying to solve? Avoid extracting data for the sake of extracting data. Dissect each aspect of a problem so that those providing solutions can see the full set of issues and work backward to pinpoint the data that are actually needed.