Annual Meeting Reports

Bridging AI and Human Expertise for Sustainable Scholarly Communication: Enhancing Integrity and Efficiency

MODERATOR:
Chirag “Jay” Patel
Cactus Communications

SPEAKERS:
Renee Hoch
PLOS

Chhavi Chauhan
American Society for Investigative Pathology

Matt Giampoala
American Geophysical Union

REPORTER:
Michele Springer
Caudex


Artificial intelligence (AI) seems to be taking over the scholarly publishing industry. Everyone is talking about it—the good, the bad, and the scary. This session explored the transformative potential of AI in scholarly publishing and examined how we, as humans, can work with AI to strengthen the integrity of academic publications and expedite knowledge dissemination. It shed light on the synergistic relationship between AI and human expertise and discussed ways to use AI to achieve long-term sustainability in scholarly publishing.

Challenges and Opportunities of AI

Chirag “Jay” Patel opened the session by discussing the challenges and opportunities associated with AI. Key challenges include bias, data privacy, and a lack of transparency about when and how AI is used. On the other hand, generative AI has incredible potential to broaden audiences and improve accessibility through tools such as live translation and text-to-speech. Patel believes that although AI will change the way we work, it won’t take our jobs. For the best outcome, AI and human expertise need to be used together. There are many opportunities for AI in publishing, and it is our responsibility to evaluate the different models, test them, and create prompts that serve our needs.

Speakers were asked how AI can help increase sustainability and efficiency in publishing workflows. Patel said to remember the acronym HITL, or “human in the loop”: AI can increase output and efficiency, but humans must stay involved. Matt Giampoala echoed this sentiment, noting that while AI can cut down the time spent on processes, human oversight remains essential. Chhavi Chauhan agreed, saying that AI can be leveraged to decrease turnaround times, cut costs, and increase accessibility; for example, using AI to translate publications into other languages can broaden access to high-impact publications.

AI Tools and Techniques

Next, the speakers were asked about specific AI tools and techniques currently being used in scholarly publishing. Renee Hoch provided some examples, noting that AI is helpful for detecting plagiarism and paper mill content, and that the STM Integrity Hub is developing tools to detect duplicate submissions, both within and across publishers. AI can also be used to identify issues with reference lists, verify reagents, and flag image integrity issues; Proofig and Imagetwin are 2 examples of this type of program. With AI, there is substantial opportunity to enhance integrity checks prior to publication.

“I don’t think technology is going to save us. I think we have to rely on our social systems and make policies on how to move forward with AI.”
—Matt Giampoala

The speakers were asked which tools should be exposed to authors for presubmission use and which should be reserved for internal integrity checks. Patel shared an example of an editor who uses ChatGPT to write better letters to authors whose work is rejected. Rather than a generic letter, ChatGPT can help write customized, personalized letters explaining why a manuscript was rejected and, in some cases, suggesting alternative journals. So far, this has been well received by authors. Chauhan added another example: her organization partners with Elsevier, which is rolling out an AI tool that scans an article on submission to assess scope and makes recommendations to a human editor about how well the article aligns with the target journal. This does not eliminate the human element; rather, it makes the decision-making process faster and easier.

Chauhan also discussed key issues publishers should consider before incorporating AI. There are ethical concerns surrounding generative AI and large language models (LLMs). LLMs produce outputs based on the prompts we provide, and as we improve our prompting, we receive better outputs. However, many of these LLMs are being monetized (e.g., the paid version of a program may produce better outputs). Chauhan wondered whether, as we increasingly incorporate these models into our workflows, we are creating disparities for those who cannot afford these tools, such as users in resource-limited settings. She also noted that people in rural areas or places without reliable Internet access may not have the same access to these resources as others.

Giampoala added that there is always potential for bias: biases exist in humans, and we might inadvertently introduce them into AI when we program or prompt. Hoch flagged privacy and confidentiality as concerns; a tool that requires uploading content from unpublished submissions could breach confidentiality. Publishers should consider this and determine whether it needs to be addressed in their policies or author agreements.

Applications of AI

Next, the speakers were asked about AI use in peer review, specifically with ethics in mind. Hoch answered first, saying she does not think generative AI will replace editors and reviewers. Peer review is a pillar of publishing, and knowing that a manuscript has been reviewed by an expert in the field is a key reason authors trust what is published. However, AI can provide substantial support to reviewers (e.g., rapid literature reviews and data analysis). It is important for editors and reviewers to disclose when they use AI and to remember that they are responsible for what they write (i.e., any AI outputs should be checked for accuracy). Chauhan added that reviewers’ human backgrounds and experience add value to their reviews; much of the knowledge and insight we have as humans simply won’t be available to LLMs until we feed it into them. Giampoala agreed, saying that although reviewers can take advantage of AI tools, it is still the reviewer’s responsibility to act ethically.

The next topic was how AI can help address inequities and inequalities. Hoch stated that AI can improve access to information by creating summaries in different languages or for people who have limitations in how they can interact with research. This can accelerate research progress and allow for more diverse perspectives.

Patel then asked the speakers what is on their wish list for new AI technologies to address new and ongoing challenges in publication ethics and publishing. Hoch would choose the ability to detect fabricated data and images. Chauhan would like to see an LLM that is fed high-impact peer-reviewed material and is available globally without firewalls. Giampoala would like to see useful LLMs that are rooted in peer-reviewed scientific literature and always return to the source to work out attributions and permissions.

Audience Q&A

During the audience Q&A, speakers were asked how publishers evaluate material that has been translated by AI. Chauhan responded that there is a human element to this, and someone who understands both languages will need to check the AI’s work. 

The speakers were asked how they are using AI during day-to-day business operations. Answers included meeting summaries, meeting recordings, note-taking, categorizing survey results, generating images, developing test questions and assessments, idea generation, and summarizing research papers. 

An audience member asked which AI programs are available for users to try out. Responses included Paperpal, Writefull, Trinka, Scite, and Elicit, with the note that many more are currently in the works.