Artificial intelligence (AI) is no longer a peripheral tool in the scientific process; it is rapidly becoming central not only to manuscript preparation, such as writing, editing, and revision, but also to the core components of research itself, including literature review, data processing and analysis, and identifying significant outcomes. Now that the rise of these tools has been discussed ad nauseam, the focus must shift to addressing their risks and opportunities with clear, actionable strategies.
Publishers are confronted with a growing need for a robust framework to assess and manage the risks and opportunities associated with AI tools. This article focuses on 4 concrete steps publishers can take to develop and implement an effective risk and opportunity management strategy for AI adoption, and offers clear recommendations for policy, oversight, and education.
Addressing the Risks in High Res: Building a Risk Management Framework
When the European Union (EU) created its policy on AI, it did not settle for a one-size-fits-all approach. Rather, the EU AI Act1 established a risk register framework through which new tools and use cases can be reviewed, evaluated, and categorized. The potential risk and harm of each tool are carefully scrutinized, with reporting, compliance, and regulatory demands imposed in line with that tool’s risk profile. For example, personal surveillance requires a much higher degree of compliance and oversight than personalized AI restaurant suggestions. The European Commission has taken those principles and adapted them into high-level living guidelines for AI in research.2
Unfortunately, scholarly publishing has yet to introduce the same level of granularity3 and clarity into its policy guidelines, settling instead for a generic “declaration” requirement regardless of the nature, use, and risk of the tool. As a result, many researchers either do not understand what is being asked of them or simply choose to ignore publisher “declaration” requirements altogether.4
The integration of AI tools into scientific publishing demands a structured and actionable risk-management framework. Moving beyond vague declarations and reactionary prohibitive policies, publishers must adopt a systematic approach that evaluates AI tools based on their specific functions, applications, and risk levels. Following are 4 suggestions for how to do so:
1. Develop a Risk Profile for AI Tools
The first step is for the industry to establish a risk profile for AI tools. Not all AI applications pose the same level of risk, and treating them as a monolith oversimplifies the complexities involved. For example, language editing tools that refine grammar and style carry lower risks than tools used to generate research content or evaluate manuscript integrity. Publishers can categorize AI tools based on their core functionalities—language support, data analysis, manuscript screening, or peer review—and assign risk levels accordingly. For example, a grammar correction tool might be categorized as “low-risk,” whereas an AI tool capable of running data analysis might fall under “high-risk.”
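As a rough illustration of what such a profile might look like in practice, the sketch below maps a tool’s declared functionalities to risk tiers. The functionality categories, tier names, and assignments are illustrative assumptions, not an established industry taxonomy.

```python
# Illustrative sketch of a risk profile for AI tools in publishing workflows.
# The functionality categories and risk tiers below are hypothetical examples.

RISK_TIERS = ["low", "medium", "high"]

# Map each core functionality to an assumed risk tier.
FUNCTIONALITY_RISK = {
    "language_editing": "low",       # grammar and style refinement
    "formatting": "low",             # reference and layout formatting
    "literature_search": "medium",   # retrieval and summarization of sources
    "manuscript_screening": "high",  # integrity and plagiarism checks
    "data_analysis": "high",         # generating or interpreting results
    "peer_review_support": "high",   # drafting or evaluating reviews
}

def risk_profile(functionalities: list[str]) -> str:
    """Return the highest risk tier among a tool's declared functionalities."""
    # Unknown functionalities default to "high" until they have been reviewed.
    tiers = [FUNCTIONALITY_RISK.get(f, "high") for f in functionalities]
    return max(tiers, key=RISK_TIERS.index)

if __name__ == "__main__":
    print(risk_profile(["language_editing"]))                   # low
    print(risk_profile(["language_editing", "data_analysis"]))  # high
```

A tool offering only grammar correction lands in the low tier, whereas the same vendor’s data analysis feature pushes it to high, which is the point of profiling by function rather than by product name.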
2. Maintain a List of Approved Tools
In addition to profiling tools, publishers in similar areas should collaborate to develop a list of approved AI tools, vetted for reliability, transparency, and compliance with ethical standards. This approved list should be dynamic, updated regularly based on performance reviews, and made accessible to editors, authors, and reviewers. Clear communication of these approved tools will reduce uncertainty and create consistency across editorial processes. Such a publisher consortium could collaborate with organizations such as Ithaka S+R, which maintains an active Generative AI Product Tracker,5 so it does not need to start from scratch.
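A shared approved-tools list could be as simple as a structured registry that records each tool’s function, risk tier, vetting status, and review date. The field names, statuses, and example entries below are hypothetical placeholders meant only to show the shape such a record might take.

```python
# A minimal sketch of a shared approved-tools registry. Field names, statuses,
# and example entries are hypothetical, not a real consortium list.

from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    name: str
    functionality: str   # e.g., "language_editing", "data_analysis"
    risk_tier: str       # tier assigned by the risk profile
    status: str          # "approved", "under_review", or "retired"
    last_reviewed: date  # entries should be revisited regularly

REGISTRY = [
    ApprovedTool("ExampleGrammarAssistant", "language_editing", "low",
                 "approved", date(2025, 1, 15)),
    ApprovedTool("ExampleAnalysisCopilot", "data_analysis", "high",
                 "under_review", date(2025, 3, 1)),
]

def is_approved(tool_name: str) -> bool:
    """Check whether a tool is currently on the approved list."""
    return any(t.name == tool_name and t.status == "approved" for t in REGISTRY)
```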
3. Not All AI Use Cases Should Be Treated Equally
Another essential element mentioned in the EU’s guidelines for AI in research is the differentiation between substantive and nonsubstantive uses of AI. Substantive uses—such as generating content, analyzing results, or drafting research conclusions—carry higher risks compared with nonsubstantive uses, such as grammar corrections or formatting assistance. Another example of a substantive use case might involve AI generating a complete literature review, whereas a nonsubstantive use could involve formatting a manuscript according to journal guidelines. Publishers should clearly define these boundaries and outline acceptable levels of AI involvement in each category.
This distinction may also affect declarations and where they appear in the manuscript. For example, AI-assisted analysis of results would need to be declared in the methods section, whereas other substantive uses, such as generating an abstract or introduction, may not belong there.
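One way to operationalize this distinction is a simple lookup that ties each use case to a category and a suggested declaration location. The use cases and manuscript sections below are illustrative assumptions rather than a prescriptive policy.

```python
# Illustrative mapping from AI use cases to a (category, suggested declaration
# location) pair. The use cases and placements are examples, not policy.

DECLARATION_GUIDE = {
    "grammar_correction":  ("nonsubstantive", "no declaration required"),
    "journal_formatting":  ("nonsubstantive", "no declaration required"),
    "data_analysis":       ("substantive",    "methods section"),
    "literature_review":   ("substantive",    "methods section"),
    "abstract_generation": ("substantive",    "AI-use statement / acknowledgments"),
}

def declaration_for(use_case: str) -> str:
    # Unlisted uses default to the cautious path.
    category, placement = DECLARATION_GUIDE.get(
        use_case, ("substantive", "check with the editorial office"))
    return f"{use_case}: {category} use -> {placement}"

print(declaration_for("data_analysis"))
print(declaration_for("grammar_correction"))
```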
4. Back to the Basics of What Makes Good Science
We often treat AI tools as potential arbiters of science itself rather than as tools that automate parts of the scientific process or increase efficiency for the authors who use them. Reliability and replicability scoring systems for submissions, regardless of whether AI tools are used, can provide an additional layer of oversight. Perhaps publishers should reconsider how they evaluate submissions: based on their ability to produce consistent, accurate, and reproducible results, not on whether AI tools were used to help produce them.
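Such a scoring system could be as modest as a weighted rubric applied to every submission, independent of AI use. The criteria and weights below are invented for illustration only.

```python
# A minimal sketch of a reliability/replicability rubric applied to all
# submissions, regardless of AI use. Criteria and weights are invented.

CRITERIA_WEIGHTS = {
    "data_availability": 0.3,        # data and code shared or deposited
    "methods_transparency": 0.3,     # methods described in reproducible detail
    "statistical_soundness": 0.2,    # analyses appropriate and fully reported
    "independent_replication": 0.2,  # results replicated, or a replication plan stated
}

def reliability_score(ratings: dict[str, float]) -> float:
    """Combine 0-1 ratings per criterion into a weighted overall score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0.0) for c in CRITERIA_WEIGHTS)

example = {"data_availability": 1.0, "methods_transparency": 0.8,
           "statistical_soundness": 0.7, "independent_replication": 0.5}
print(round(reliability_score(example), 2))  # 0.78
```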
The Role of Education and Training
I have the sense that many publishers have jumped to drafting and implementing policy before their editorial teams have developed a deep understanding of different AI tools and how they work. A critical aspect of AI risk management is ensuring that editorial staff, authors, and reviewers are well-versed in both the capabilities and limitations of AI tools. Editorials, such as the one published by ACS Nano in 2023, that lay out best practices for authors using AI tools6 go a long way toward promoting author understanding and education before publishers jump straight into policy.
Education initiatives should go beyond basic training and include practical workshops and scenario-based exercises that mirror real-world publishing challenges. Editorial teams must be trained to recognize AI-generated content, assess AI tool outputs critically, and identify potential misuse. In the AI boot camps I have run at universities and publishers around the world over the last 12 months, authors and editors focus on technical proficiency alongside ethical awareness, gaining a deep understanding of how the tools work and the engines that power their outputs. This empowers them to make informed decisions about author use and about how and when to integrate AI tools into their own workflows.
For example, one of the most common points of confusion for publishers is differentiating between purely generative large language models, such as ChatGPT, which are prone to hallucinations, and retrieval-augmented generation systems, such as Scite, Elicit, and Perplexity, which retrieve real scientific literature.
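For readers unfamiliar with that distinction, the sketch below contrasts the two patterns: a purely generative model answers from its training data alone, whereas a retrieval-augmented system first fetches real sources and then generates an answer grounded in them. The function names are placeholders, not the APIs of any specific product.

```python
# Contrast between a purely generative answer and a retrieval-augmented one.
# `generate` and `search_literature` are hypothetical stand-ins, not real APIs.

def generate(prompt: str) -> str:
    """Stand-in for a large language model call."""
    return f"<model output for: {prompt}>"

def search_literature(query: str) -> list[str]:
    """Stand-in for a search over a curated index of real papers."""
    return ["Doe et al. 2023 (abstract)", "Smith et al. 2024 (abstract)"]

def purely_generative_answer(question: str) -> str:
    # The model answers from training data alone; citations may be hallucinated.
    return generate(question)

def retrieval_augmented_answer(question: str) -> str:
    # Retrieve real sources first, then ask the model to answer using only them.
    sources = search_literature(question)
    context = "\n".join(sources)
    return generate(f"Answer using only these sources:\n{context}\n\nQuestion: {question}")
```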
We Will Work Together Because We Have No Other Choice
The integration of AI into scientific publishing is not a temporary experiment—it represents a structural transformation. The next phase of AI adoption will likely see more sophisticated tools entering editorial and peer review systems, bringing both promise and new challenges.
Publishers must anticipate these advancements by building flexible risk management strategies and policies that can adapt to emerging technologies. Collaboration across the industry will be critical to build shared frameworks, joint guidelines, and industry-wide initiatives that can help standardize AI policies and prevent fragmentation across publishers.
Moreover, global partnerships with technology providers, academic institutions, and regulatory bodies will play an essential role in shaping the ethical and operational foundations of AI adoption in publishing.
The conversation around AI in scientific publishing must move beyond whether to adopt AI tools and instead focus on how to adopt them responsibly. A risk management framework tailored to the diverse applications of AI tools is the first step in this process.
By developing clear risk profiles, approving vetted tools, differentiating between substantive and nonsubstantive uses, and implementing reliability scoring systems, publishers can navigate the complexities of AI adoption with confidence. Equally important is the commitment to education and training, ensuring that every stakeholder in the publishing ecosystem understands both the opportunities and the risks of AI.
The future of scientific publishing lies not in avoiding AI but in embracing it thoughtfully, with robust safeguards in place. The responsibility now falls on publishers, editors, and researchers to collaborate in building a publishing environment where AI serves as a tool for progress, integrity, and innovation.
Disclosure
I uploaded lectures and slides I created on my own to ChatGPT for a first draft of this article. I then reviewed, revised, and edited it before sharing it with ChatGPT for feedback. After implementing some changes I agreed with, I uploaded it once more to ChatGPT for an edit/proofread. The responsibility for the content in this article is mine entirely.
References and Links
1. https://artificialintelligenceact.eu/high-level-summary/
2. https://european-research-area.ec.europa.eu/news/living-guidelines-responsible-use-generative-ai-research-published
3. https://www.digital-science.com/tldr/article/dark-matter-whats-missing-from-publishers-policies-on-ai-generative-writing/
4. https://www.chronicle.com/article/scholars-are-supposed-to-say-when-they-use-ai-do-they
5. https://sr.ithaka.org/our-work/generative-ai-product-tracker/
6. https://pubs.acs.org/doi/10.1021/acsnano.3c01544
Avi Staiman is the founder and CEO of Academic Language Experts. He is a chef at The Scholarly Kitchen, cohost of the New Books Network “Scholarly Communication” podcast, and a reviewer for Wiley’s Learned Publishing journal. He is a thought leader and consultant on AI tools for research, bridging the gap between publishers and authors and advising on how to support and empower ESL researchers. He is a core member of CANGARU, where he represents EASE in creating legislation and policy for the responsible use of AI in research. He has been a guest lecturer at NYU’s Master’s Program in Translation & Interpreting and at the University of Tokyo. His essays have appeared in the Cambridge University Press Blog, The Scholarly Kitchen, Multilingual, and Times Higher Education.
Opinions expressed are those of the authors and do not necessarily reflect the opinions or policies of their employers, the Council of Science Editors, or the Editorial Board of Science Editor.