Academic Publishers’ AI Policies

Research Question

What are academic publishers’ policies on author use of AI, and what can we extrapolate from these policies to examine how trade publishers might take a stance on AI use by authors?

Introduction

Academic publishers are currently much more explicit than trade publishers about their policies regarding AI use by authors. Many trade publishers either have no public statement or only a vague mention of being “human authored,” a phrase that is not as cut-and-dried as it may seem. The policies presented by academic publishers and journals are by no means all-inclusive or foolproof, but they provide excellent context for the considerations that need to be made when using AI. What these policies lack also provides clear evidence of how the conversation around AI use, and the policing of that use, needs to continue and evolve. By analyzing the strengths and weaknesses of these policies, trade publishers can take away insights as they begin (or should begin) crafting their own policies on AI use in writing. These policies work toward answering the question of how much AI use in writing is permissible: how far is too far, and how do we know? These are the questions publishers have to answer as they create their own policies on AI use by authors.

This research does not aim to answer whether AI should be used, or even whether it is being used. As we have discussed in class, many companies are already integrating AI into their workflows. For research and writing, these tools are being promoted to academics and are being examined and used within research itself. Frontiers, a research publisher whose AI policy I analyzed, promotes a secondary service called Editage to help researchers use AI in their writing and editing. Freelance science journalist Sneha Khedkar wrote an article on the Editage site detailing helpful ways researchers can use AI in their writing: she promotes its organizational abilities for compiling research, its time-saving abilities for annotating notes, and its copyediting help for rewriting paragraphs and offering grammar suggestions (Khedkar). I include this information as context: these tools are already being used, so it is now up to publishers to catch up and decide how to control that use.

Methods and Textual Review

To answer this question, I looked at twenty-one academic publishers that had explicit AI policies in their submission guidelines (or codes of conduct) and analyzed those statements to pick out common elements. My selection was based on the presence of a policy and the ease of finding the publisher’s guidelines. Organizations like Purdue University and the University of California, Davis have compiled lists of academic publishers with policies on AI use, and these lists helped me build a sample. In an effort to be more representative, I looked at university presses that list submission guidelines publicly (not all do; some offer them only after an inquiry is made) as well as academic publishers and journals. There are also a number of larger associations that many smaller journals draw from: the International Committee of Medical Journal Editors (ICMJE) and the Committee on Publication Ethics (COPE) are two that other publishers reference often. After compiling information from all twenty-one publishers, I looked specifically at their policies on AI and writing (or editing). Many of them also note guidelines on AI use in proofreading, but since I am looking specifically at author use, I did not include those here.

The guidelines all very explicitly state that AI cannot be listed or credited as an author or coauthor. The general standpoint is that AI cannot be held accountable for the content or information it generates; it is the human author(s) who is accountable and liable for the accuracy, veracity, integrity, and originality of the work. Many of the publishers point out that AI tools can output inaccurate, incomplete, or biased information that the human author(s) is responsible for correcting. AI tools have been documented “hallucinating,” or generating content that is inaccurate or fabricated, and, as we have discussed in class, GenAI can create or misattribute citations. The publisher policies take this into account and put the responsibility on the human author(s) to ensure there is no plagiarized or false content in the research. Where the policies begin to differ is in their expectations for reporting and documenting AI use: what needs to be disclosed, and how. Much of this comes down to questions about whether AI is being used for copyediting, how extensively it is being used, and which uses are unethical. Analyzing all these facets helps showcase the bigger picture for AI regulation.

SWOT Analysis

Strengths

It is a simple truth that AI tools are here, and they are going to be used as the technology grows. Simply having a policy around their use is already a big step toward unifying practices and ensuring consistency across publishing. A consensus among researchers, writers, and publishers about what is acceptable and unacceptable creates a clear picture for use and allows for clear ramifications for misuse. If no policies exist, or if they are inconsistent, there is no way to decide whether one use is fair and another is not. Taking a stance is the first step toward deciding on ethical uses of AI, and it allows these tools to be integrated into the process as they continue to develop.

One potential benefit of AI use in writing is that it can help non-native English speakers, who are often required to submit their research in English, publish their work. Hin Boo Wee and James D. Reimer studied the impact of AI detection tools on non-native English speakers and note that “non-English academics often write in English to reach a wider audience, and AI-aided tools can aid in rapidly achieving the English-language proficiency levels required of publication” (Wee). Publication is an important, often integral, part of a researcher’s career development; an academic professional’s success and competency can be judged by the number of publications they have (Rawat). Allowing these tools to be used to translate research into English, at the literacy level required and expected of academics, means more opportunities could open up for non-English-speaking researchers.

One of the most common threads in these policies concerns authorship. Every AI policy I looked at heavily stresses that AI cannot be an author or coauthor. This does not mean AI cannot be used; rather, it highlights that the responsibility and liability for the work rest with the human author(s). This is an important distinction because it prevents AI from becoming a scapegoat for spreading misinformation, plagiarism, copyright infringement, or bias.

Weaknesses

One of the biggest weaknesses among these policies is the vague language used, specifically around what constitutes AI use and, subsequently, what use needs to be documented and how. This vagueness creates opportunities for misunderstanding and misuse (intentional or accidental), and it leaves researchers vulnerable to negative ramifications they may not have been aware of. A research study conducted through UC Santa Barbara explored researchers’ perceptions of what constitutes ethical use of AI and what should or should not be documented (Chemaya). The results found an even split between researchers who felt text rewritten by ChatGPT should be reported and those who felt it should not, a split attributed to varying ideas about whether this type of use is ethical. The main factors differentiating respondents were academic level (student/PhD vs. professor/tenured professor) and English language dominance (first language vs. second language) (Chemaya). While researchers and authors will likely always have varying ideas about what constitutes ethical use, it is up to publishers to put forth guidelines that create consistency in what gets published.

However, these policies do not clarify what use is allowed or what should be reported. The British Journal of Radiology states, “Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used.” It is unclear, however, whether correcting grammar with AI or rewriting (copyediting) with AI is included. Other policies, like those of Frontiers and PLOS ONE, do explicitly cover editing as well as writing, with requirements to document how the tool was used, where, which tool, and which model in a methods or acknowledgements section. This disparity between what should and should not be reported means a researcher could be flagged for AI use simply because their journal’s guidelines never required them to disclose it, and their credibility could be harmed even though disclosure was not part of their publication process. More uniform definitions of AI use and documentation requirements are needed to protect researchers, and these are currently lacking across the board.

Opportunities

Avi Staiman notes that in academia, where publication is key but many academics are not creative writers or language professionals, allowing AI use lets researchers be researchers again instead of worrying about being authors (Staiman). In academic publishing, where creativity is not the goal and original work is easier to codify, AI writing tools give researchers the freedom to focus more on the content of their research and less on writing about it in a polished manner. They can have GenAI help present the research in a way that is understandable to a broader public, or help draft an outline for presenting their research logically.

Staiman also recommends that as people use AI tools to write and learn their limitations, professionals work toward a compiled list of approved AI tools for researchers to use, creating a standard set (Staiman). Researchers can train the next generation to use AI ethically and to be wary of the technology’s pitfalls so users are better informed. They can develop guides on AI prompting, on what hallucinations are, and on how AI produces its outputs. If certain tools are more reliable than others, those can become standard. If one tool is better at copyediting and producing accurate content, it can be recommended for non-English speakers. If another tool analyzes research data effectively but hallucinates when asked to generate more content, that can be documented and flagged for users to be aware of. Making this kind of information available will help ensure these tools are used properly.

Threats

As much as allowing AI use is a strength for non-native English speakers, given the current capabilities of these tools and the vagueness of policy language, they actually face some of the biggest threats. With English the dominant language of research publication and guidelines around acceptable use still unclear, non-native English researchers are incredibly vulnerable. They stand to benefit enormously from being able to use these tools to write in English at the literacy level required of them. Right now, however, these tools are flawed and could cause further harm to users who lack the English proficiency to recognize when the AI has introduced an error, and who lack proper training, tools, and guides on how to use AI effectively.

Much of this comes down to the detection of AI in writing. In their research on AI use and its impact on non-native English speakers, Wee and Reimer concluded, “The current tools are unable to reliably detect authenticity for non-English writing, indicating a potentially large issue for academia . . . The use of AI-aided translation and paraphrasing tools, which, on the surface, could help increase one’s accuracy in English writing, resulted in AI countermeasure tools flagging works as AI-aided essays, putting non-English academics at a disadvantage” (Wee). With publication serving as proof of credibility and the current currency of academic and career advancement, the possibility that using these tools could hinder one’s ability to progress is detrimental. And these detection tools are unreliable for everyone: a researcher who is not properly trained to use AI tools in their writing could also be flagged for AI use, and if the publisher policy they follow does not require disclosure of use, their credibility could be harmed.

Implications for Trade Publishing

There is a large distinction to be made between academic publishing and trade publishing, and I think much of it comes down to creativity and function. While academic works are concerned with facts, plagiarism, and credibility, trade authors face a much murkier battle with AI use. It is difficult to place the same requirements on an author to prove their plots, characters, and stylistic voice are not plagiarized when the ownership of these “original ideas” is so difficult to pin down. Where the science conducted in a research paper needs to be either completely original or properly cited, the same cannot be said for the sciences found in science fiction: no one cites their source for faster-than-light travel, yet the technology exists across the genre. Tropes that readers expect in genre fiction cannot be copyrighted, so it is harder to clarify how an AI-generated idea or suggestion fits into ownership.

As discussed with Thad McIlroy, disclosure of use by all parties (author, publisher, etc.) is important for honest and ethical work with AI. By stating what tool was used and how, we can at least trace the origins of the work. If a tool like ChatGPT, which was trained on stolen works, is used for idea generation, then perhaps a developmental editor is needed to ensure that those ideas are being used in new or transformative ways. If ChatGPT is being used for copyediting, then the human author and editor should be consciously aware of how the author’s voice is being rendered and whether it is truly their voice.

Disclosure is not the whole solution, though. Staiman notes that “Disclosure on its own doesn’t require authors to demonstrate how [they] verify the reliability of the outputs” (Staiman). Having a human involved in each step of the process is an important safeguard. I think the use of AI could make literary agents and acquisitions editors more important than ever (though perhaps more overworked than before), as they could mediate between author and publisher to help ensure these tools are being used ethically and appropriately. If a blanket statement in a book’s acknowledgements thanking ChatGPT for helping the author draft ideas is the only disclosure given, and the use is not analyzed further, I fear this will only shift the ethical dilemma onto consumers (readers) to opt for books in which AI was used ethically. While I don’t want to bar anyone from being able to publish their book, I do think both author and publisher are responsible for the integrity of the book; they need to work together to figure out how AI tools can best aid writers while also protecting the work created by other authors. Increasing public understanding of how these tools work, how they were trained, and the pitfalls they have will help create a more informed consumer base.

Conclusions

Despite the idea that people could “opt out” of using AI, the reality is that many people will use it. Creating policies, flawed or not, starts the conversation on how to use AI in publishing in an ethical, responsible manner. These policies can help professionals create resources and guides for AI use, inform authors about how AI works (and was trained), and inform the public, both authors and readers, so they can make informed decisions. It is important to be mindful of the threats AI poses for authors of any kind and to integrate those considerations into AI implementation. Using AI to write does not have to be a stigmatized practice if done right: done honestly and in a way that uplifts an author’s own creativity, it could be a wonderful tool to add to a writer’s repertoire.

Works Cited

Chemaya, Nir, and Daniel Martin. “Perceptions and detection of AI use in manuscript preparation for academic journals.” PLOS ONE, vol. 19, no. 7, e0304807, July 12, 2024, https://doi.org/10.1371/journal.pone.0304807

Khedkar, Sneha. “Using AI-powered tools effectively for academic research.” Editage Insights, Editage, Sept. 13, 2023, https://www.editage.com/insights/using-ai-powered-tools-effectively-for-academic-research?refer=scroll-to-1-article&refer-type=article

Rawat, Seema, and Sanjay Meena. “Publish or perish: Where are we heading?” Journal of Research in Medical Sciences, vol. 19, no. 2, Feb. 2014, pp. 87–89, https://pmc.ncbi.nlm.nih.gov/articles/PMC3999612/

Staiman, Avi. “Dark Matter: What’s missing from publishers’ policies on AI generative writing?” Digital Science, Digital Science & Research Solutions, Feb. 1, 2024, https://www.digital-science.com/tldr/article/dark-matter-whats-missing-from-publishers-policies-on-ai-generative-writing/

Wee, Hin Boo, and James D. Reimer. “Non-English academics face inequality via AI-generated essays and countermeasure tools.” BioScience, Oxford University Press, vol. 73, no. 7, 2023, pp. 476–478, https://doi.org/10.1093/biosci/biad034

Publishers’ Policies

ACM Publications Board. “ACM Policy on Authorship.” Association for Computing Machinery. Apr. 20, 2024, https://www.acm.org/publications/policies/new-acm-policy-on-authorship

AIP Publishing. “Ethics for Authors.” AIP Publishing LLC. https://publishing.aip.org/resources/researchers/policies-and-ethics/authors/

BJR. “Instructions to Authors.” British Journal of Radiology. Oxford Academic. https://academic.oup.com/bjrai/pages/general-instructions?login=false

Cambridge University Press. “Cambridge launches AI research ethics policy.” Cambridge University Press & Assessment. Mar. 14, 2023, https://www.cambridge.org/news-and-insights/news/cambridge-launches-ai-research-ethics-policy

De Gruyter. “Publishing Ethics.” De Gruyter. https://www.degruyter.com/publishing/for-authors/for-journal-authors/publishing-ethics

Elsevier. “Generative AI policies for journals.” Elsevier. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals

Emerald Publishing. “Emerald Publishing’s stance on AI tools and authorship.” Emerald Publishing. Feb. 22, 2023, https://www.emeraldgrouppublishing.com/news-and-press-releases/emerald-publishings-stance-ai-tools-and-authorship

Frontiers. “Author Guidelines.” Frontiers Media. https://www.frontiersin.org/guidelines/author-guidelines

ICMJE. “Defining the Role of Authors and Contributors.” International Committee of Medical Journal Editors. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

IEEE. “Submission Policies.” Institute of Electrical and Electronics Engineers. https://conferences.ieeeauthorcenter.ieee.org/author-ethics/guidelines-and-policies/submission-policies/

Editorial Office. “Innovations Journals Policies and Processes.” Innovations Journals. Allen Press, Sept. 6, 2024, https://meridian.allenpress.com/innovationsjournals/pages/Policies-and-Processes

JACS. “Author Guidelines.” Journal of the American Chemical Society. ACS Publications. Oct. 29, 2024. https://researcher-resources.acs.org/publish/author_guidelines?coden=jacsat#authorship

JGME. “JGME Ethics Policies.” Journal of Graduate Medical Education, Allen Press, https://meridian.allenpress.com/jgme/pages/ethics_policies

MIT Press. “Current Authors.” MIT Press. https://mitpress.mit.edu/for-authors/

Nature Portfolio. "Artificial Intelligence (AI).” Springer Nature, https://www.nature.com/nature-portfolio/editorial-policies/ai#ai-authorship

PLOS ONE. “Ethical Publishing Practice.” PLOS. https://journals.plos.org/plosone/s/ethical-publishing-practice#loc-artificial-intelligence-tools-and-technologies

Princeton University Press. “Princeton University Press statement on artificial intelligence.” Princeton University Press. Dec. 6, 2023, https://press.princeton.edu/news/statement-on-ai

Sage. “Artificial Intelligence Policy.” Sage Publications. https://uk.sagepub.com/en-gb/eur/chatgpt-and-generative-ai

Stanford University Press. “AI Policy.” Stanford University Press. https://www.sup.org/about/ai-policy

Taylor & Francis Newsroom. “Taylor & Francis Clarifies the Responsible use of AI Tools in Academic Content Creation.” Informa UK Limited, Feb. 17, 2023, https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/

Wiley Author Services. “Best Practice Guidelines on Research Integrity and Publishing Ethics.” John Wiley & Sons, https://authorservices.wiley.com/ethics-guidelines/index.html#22
