Annual Meeting Reports

Artificial Intelligence in Scholarly Publishing: Responding to Opportunities and Risks

MODERATORS: 
Tony Alves
HighWire Press

Patty Baskin
American Academy of Neurology

SPEAKERS:
Robert Althoff
UVM Health Network

Robin Champieux
Oregon Health & Science University

Hilary Peterson
American Psychological Association

Heather Staines
Delta Think

REPORTER:
Peter J Olson
JAMA Network

Never trust anything that can think for itself if you can’t see where it keeps its brain. 

When Arthur Weasley admonishes his daughter Ginny with the above adage in JK Rowling’s Harry Potter and the Chamber of Secrets, he’s referring to a deviously sentient diary that exists within a fictional world of magic and sorcery. Yet, as quoted by Hilary Peterson at the CSE 2024 Annual Meeting, his warning aptly evokes the concerns and trepidations that surround the use of artificial intelligence (AI) in the very real world of scholarly publishing. In a timely and fascinating session moderated by Tony Alves and Patty Baskin, Peterson and her fellow panelists addressed both the risks and opportunities connected with AI usage in the publishing process by providing their perspectives, sharing their experiences, and—without the aid of a crystal ball—offering their thoughts about the future.

Cases in Point

Alves kick-started the session by inviting each panelist to describe their encounters with AI within the context of their respective professional roles. Robert Althoff, Associate Editor of the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), UVM Health Network, shared 2 tales from a journal editor’s perspective. In the first tale, the JAACAP editors encountered a review that was “a little off,” and an AI detection program indicated a 92% likelihood that the review had been penned by AI. The relatively new reviewer admitted to having used an AI tool out of curiosity, but stressed that no confidential information had been included in the review. In response, JAACAP added a question to its reviewer submission system: “Have you used AI or AI-assisted technology in your review?” In the second tale, an author ran their responses to reviewers through an AI tool to ensure they were adequate. Although this scenario raises different questions regarding confidentiality, it nonetheless prompted another update to the JAACAP submission system to ask authors whether AI or AI-assisted technology had been used to respond to reviews.

In her role as university librarian at Oregon Health & Science University (OHSU), Robin Champieux focuses on scholarly communication, rigor and reproducibility, and open science, which—as she noted—equips her with 2 unique lenses through which she views the AI landscape: as a leader of a biomedical library and as an advocate of open access and rigor and reproducibility. Peering through these lenses, Champieux cited 3 AI-related endeavors at OHSU. First, there is a continual effort to help researchers navigate publishing and scholarly communication activities and decisions that involve AI, particularly with regard to its ethics and transparency. Second, both educators and learners are taught to scaffold their AI literacy; this is particularly important at OHSU, where the learners are also authors. Finally, because libraries are the stewards of information, OHSU staff are constantly considering not only how AI tools affect access to information, but also how they intersect with the use of copyrighted and licensed content.

When it comes to AI and AI-assisted technology, Peterson, as Associate Publisher at the American Psychological Association (APA), is primarily concerned with publication policy—though she noted that the APA relies heavily on the expertise of its community to help develop the policies that govern its 90 journals. In 2023, the APA Publications and Communications Board ratified a policy that is consistent with the policies of other publishing institutions: AI cannot be considered an author because it cannot meet the responsibilities that come with authorship, cannot sign forms, and cannot attest to the content of an article. Furthermore, authors are required to disclose the use of any AI tools and to upload any output as supplemental material; however, the latter requirement is proving challenging, Peterson said, as many authors don’t retain the output or indicate that it is too voluminous to provide. Such challenges point to a larger conundrum: whether to revise a policy in response to every new use case that comes along, particularly given how rapidly the field is evolving.

Heather Staines continued the conversation around publication policy from the perspective of an industry consultant. As Director of Community Engagement at Delta Think, Staines stressed that helping a client develop an AI policy is not a one-size-fits-all endeavor given the myriad variables at play, including an institution’s mission, the discipline within which it operates, and the pace at which AI tools are evolving. On top of that, important conversations are taking place around AI tool investments—namely, to ensure that the integration of a given tool for a given client will be feasible and sustainable by assessing whether the tool creator has a business model and directional goals compatible with the client’s own. Staines also noted that her concerns around such investments in the education space overlap with Champieux’s, particularly when it comes to cost; although many academic librarians might like to add certain AI tools and services to other content they’ve already licensed, their flat or decreasing budgets are preventing them from doing so.

Opportunities

Alves then asked the panelists about the potential opportunities presented by AI in the world of scholarly publishing. Champieux once again championed the concept of rigor and reproducibility, the subject of a PhD course she teaches at OHSU. Recently, she and her students attempted to improve the Methods section of a paper on cell line authentication by entering it into ChatGPT, and the results were “quite impressive.” Additionally, she said, AI-assisted technology can provide her students with opportunities to simulate real-world practices and problem-solving that are otherwise sparse in a classroom setting. Althoff added that from the journal editor’s standpoint, there is excitement about the many AI tools that have the capability to streamline workflows, improve scope checks, and enforce accountability. As one example, he opined that if a journal has an adequate and well-established review process, that process could be learned by an AI tool to assist journal editors with prior probabilities—something that humans are generally “terrible” at. If a computer can help establish those prior probabilities, Althoff said, a human can then take that information and learn how to apply it properly to create a more efficient process.

Harking back to her belief that publishing institutions must engage with their communities, Peterson framed the potential opportunities for publishers within this context. Noting that the process of establishing publication policy has to be “bottom-up” vs “top-down,” she stressed that publishers should resist taking an authoritative approach and instead foster a culture in which a community’s ethics are the drivers of AI policy-making and sustainable best practices. One particularly encouraging endeavor she cited is CANGARU,1 a meta-analysis of publication policies and instructions for authors designed to establish a unified set of AI-related policy standards within the scholarly publishing industry. 

Alves then asked Staines if she thought AI was just a flash in the pan. Staines, a historian, responded by saying that she takes a longer view of things. The industry is still in the very early stages of AI-assisted technology, she said, and “there are a lot of smart people out there who will figure some of these things out.” She then suggested that it would be interesting to revisit this session at the CSE 2030 Annual Meeting to reflect on the things we thought would be a concern but weren’t. She does not believe AI is “the end of knowledge,” she said in a playfully apocalyptic tone; rather, she fully believes that the future will simply look different than what we might imagine now.

Risks and Unintended Consequences

Opportunities are usually accompanied by risks, and each panelist went on to discuss the risks that concern them most. Althoff’s primary concern is ownership. There’s no doubt that AI algorithms are going to improve more than we can predict, he said, so industry leaders must educate their constituents about the ethics of AI-assisted technology, think carefully about how they’re promoting the use of AI tools—and avoid becoming subservient to them. He closed by saying that education and community engagement will be the keys to mitigating the many risks involved. Champieux echoed this sentiment and said that OHSU is having similar conversations around ethics and education. Furthermore, she has underlying concerns about equity and accessibility. Engagement with AI tools is increasingly becoming a workplace experience requirement, which, for her, raises questions about the equitable distribution of engagement opportunities as well as the inherent accessibility of such tools to all learners and researchers.

Staines affirmed Champieux’s musings about equity by noting that such concerns are shared by many of her clients: What are the biases of the training dataset? What biases are built into the prompts? Does a diversity, equity, and inclusion (DEI) benchmarking tool for reviewers and authors incorporate non-Western names? Noting that the bar for DEI best practices is moving constantly—and rapidly—Staines said that AI policies that cross over with a publisher’s DEI-related initiatives will need to become increasingly sophisticated to accommodate and anticipate ever-evolving ethical considerations, particularly in disciplines where an article’s gestational period is much longer than in others.

Peterson closed the session by reciting the aforementioned Harry Potter quote to express her primary concerns: privacy, confidentiality, and a general distrust of AI. The latter concern is particularly prominent when it comes to citations of source material, she said. If an AI tool is used to cite sources, how disconnected might those citations be from the original material, and to what degree might they be misrepresented or even plagiarized? Does an AI tool know if an article has been corrected or retracted? If a preprint is cited, does the tool know if changes were made between the preprint and the ultimate publication? For Peterson, sacrificing such authenticity for the sake of efficiency is a substantial concern.

The Future

No longer the stuff of fiction and fantasy, AI is here to stay—and determining how the scholarly publishing industry can best use it to further the field while preserving the integrity of the publication process is far from simple. Fortunately, the industry is replete with key players like the panelists for this session: influential, expert publishers and practitioners who are asking the important questions, proceeding with caution and flexibility, and establishing reasonable and responsible policies, all while maintaining an optimism that an AI-assisted greater good is indeed an achievable goal.

References and Links

  1. Cacciamani GE, Eppler MB, Ganjavi C, Pekcan A, Biedermann B, Collins GS, Gill IS. Development of the ChatGPT, Generative Artificial Intelligence and Natural Large Language Models for Accountable Reporting and Use (CANGARU) guidelines [preprint]. arXiv. 2023;2307.08974v1. https://doi.org/10.48550/arXiv.2307.08974.