The Impact Factor is losing its impact.
Long used as a standard measure of a journal’s importance in its field, the Impact Factor (IF) was developed by Eugene Garfield, founder of the Institute for Scientific Information, before that institute became part of Thomson Reuters. In essence, it is the average number of citations received in a year by the articles that a journal published during the preceding two years. Overall citation information is valuable in its simplicity and quantifiable nature, but the session’s presenters believe that it does not give an accurate picture of importance—or impact.
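For readers who want the arithmetic behind that description, the standard two-year calculation can be written roughly as

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

so a 2012 IF of 3, for example, would mean that items the journal published in 2010 and 2011 were cited, on average, three times apiece during 2012.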
Peter Shepherd, director of COUNTER, has heard “constant moaning” about the IF and its shortcomings. Critics take issue with a variety of flaws: it can distort author and publisher behavior; data are limited to the biomedical field; and citations typically understate the impact of an article, and hence of a journal, and say nothing about context. And the system can easily be gamed, for example by publishing high-profile articles early in the year so that they have more time in which to accumulate citations.
The list of cons goes on, but the end point is clear, said Kevin A Roth, editor-in-chief of the American Journal of Pathology: “The Impact Factor does not present the whole story.” Cameron Neylon, advocacy director at the Public Library of Science, agreed. He said that the IF provides neither the right data nor the correct information—and that it is, in fact, a poor predictor of any specific article’s impact. “It is neither precise nor comprehensive nor current,” he said. He, like the other presenters, said that there are different, and better, ways to measure impact.
One alternative approach gaining prominence in the field is altmetrics. Jason Priem, the author of Altmetrics: A Manifesto, was unable to attend the session, but Shepherd filled in on his behalf and gave an assessment of these emerging metrics. Scholars’ work, and the conversations about that work, have moved to the Web, he noted, and Web-centric tools, such as bookmarks, links, tweets, and blogs, offer better ways to filter the wider impact and influence of scholarly research.
Instead of evaluating a journal as a whole, Shepherd sees individual articles becoming the primary focus. In his role at COUNTER, he has helped to develop a use-based metric, the usage factor, that focuses on online full-text single-article downloads. The usage factor can count downloads anywhere an article appears online, from data repositories and digital libraries to a publisher’s or author’s Web site. It produces immediate results, is independently audited, and covers all categories of publication: more than 15,000 journals, compared with the IF’s 9,000. COUNTER is in the third stage of its usage-factor project, in which it is working to draft a code of practice. In its first two stages of testing, the model has proved robust.
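The precise calculation is being defined in that code of practice, but the general shape of a download-based, article-level metric is easy to sketch. The short Python example below is an illustration only, not COUNTER’s specification; the function name, the choice of a median, and the sample counts are assumptions made for the sketch.

```python
from statistics import median

def usage_factor(downloads_per_article):
    """Illustrative sketch only: summarize full-text downloads per article.

    `downloads_per_article` maps an article ID to its count of online
    full-text downloads over a reporting period, aggregated from every
    platform where the article appears (publisher site, repositories,
    digital libraries). The real COUNTER metric is defined in its code
    of practice; a central-tendency summary such as the median is used
    here purely to show the shape of a download-based metric.
    """
    counts = list(downloads_per_article.values())
    return median(counts) if counts else 0

# Hypothetical counts for three articles in one journal.
downloads = {"art-001": 420, "art-002": 95, "art-003": 1310}
print(usage_factor(downloads))  # -> 420
```

A median rather than a mean is shown because download counts, like citations, tend to be skewed by a handful of heavily used articles; that choice is part of the sketch, not a statement about COUNTER’s method.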
Whereas COUNTER statistics examine numbers of article downloads, other methods go further and include additional Web-based information on consumption. Users leave traces of their research activities as they work on the Web, and drawing on those data, PLOS has helped to create metrics capable of evaluating a variety of users and activities related to scholarly information. Neylon discussed measuring an article’s total online page views, PDF downloads, tweets, and mentions on Facebook and Wikipedia, among other signals. The resulting statistics can give a fuller picture of who is using a particular article, how people are using it, and what they are using it for, Neylon said; ultimately, we care about people using research, not only about citations of authors’ works.
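The snippet below does not reproduce PLOS’s article-level-metrics service, whose interface is not described in the session; it simply sketches how per-source counts for a single article might be rolled up into one article-level summary. All of the source names and numbers are hypothetical.

```python
from collections import Counter

# Illustrative sketch: combine per-source event counts for one article
# into a single article-level summary. Sources and figures are made up.
sources = [
    {"html_views": 5400, "pdf_downloads": 812},
    {"tweets": 37, "facebook_mentions": 12},
    {"wikipedia_mentions": 2},
]

article_level_metrics = Counter()
for source in sources:
    article_level_metrics.update(source)  # adds counts source by source

print(dict(article_level_metrics))
# {'html_views': 5400, 'pdf_downloads': 812, 'tweets': 37,
#  'facebook_mentions': 12, 'wikipedia_mentions': 2}
```

Keeping the counts separate by source, rather than collapsing them into one number, is what lets such metrics answer the questions Neylon raised about who is using an article and how.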
Roth said that perspective is important when it comes to measuring impact. Researchers, department chairs, and journal editors must ask, Are journal metrics helpful and important? Do they affect my decision making? Will the IF (or other metrics) play a role in the future? He answers each question in the affirmative, concluding that metrics, whatever they may be, are here to stay. They all—even the IF—have merit, and they provide different information to different people at different times.