by Kevin Breen
This article is the fifth in a biweekly series examining AI and its implications for publishers and authors.
Reactions to genAI within the literary world have varied widely, ranging from enthusiastic acceptance to staunch opposition. In this article, we’ll look at writers, publishers, and literary organizations that are actively and publicly shaping their positions on this evolving technology. After examining these case studies from the literary world, we’ll consider what they mean for small presses. That way, you and your organization can be better equipped to reach a genAI stance before one is foisted upon you.
In publishing, three types of relationships to genAI have naturally emerged: those in favor, those opposed, and those seeking codification, regardless of whether the literary entity is “for” or “against” the new technology.
Embracing genAI’s influence
First, let’s look at those who have welcomed genAI’s influence on books and why that might be the case. Some proponents, for example, have framed genAI as a democratizing tool. In an op-ed published in The Bookseller, Shimmr AI’s founder Nadim Sadek argues that for millennia, “expressing creativity meant mastering a craft.” He sees this as an inequity: if every human is creative, then tools like generative AI simply lower the craft barrier, allowing more people to tell their stories and express themselves. In his view, genAI helps to liberate global creativity.
Similarly, the now-defunct NaNoWriMo, whose flagship program was sponsoring National Novel Writing Month, issued a statement in September 2024 asserting that the “categorical condemnation” of AI has “classist and ableist undertones.” Though the organization went on to qualify its comments in response to an outpouring of criticism, NaNoWriMo’s original articulation aligns with the sentiment from Shimmr AI’s founder: AI tools increase access and radically democratize storytelling.
Of course, genAI’s broad appeal extends to small presses. Much of the promotional copy written about tools like ChatGPT and Gemini centers on freeing workers from rote tasks. This rhetorical approach imagines a world where advertising emails, marketing copy, or metadata could be created and deployed without human input. After all, small presses are not known for deep reserves of free time or funding. A technology that reduces overhead, improves efficiency, and outsources tedious “admin” work seems perfect. But as with many industrial shortcuts (murky labor standards, questionably sourced materials, automation that replaces human workers), the material benefits must be weighed against the ethical costs.
Opposing the encroachment of large language models (LLMs)
Others in the publishing world have expressed staunch opposition to genAI’s influence. Some presses have worked anti-AI statements into their “about” pages and into calls for submissions. Tenebrous Press, for example, states: “There will be no machine-generated stories, editing, art, or narration…. We will maintain anti-machine-generation clauses in our contracts. We will introduce machine-gen-free disclaimers in our copyright pages.” Likewise, member organizations, such as the Authors Guild, have made their AI opposition clear to stakeholders. In October 2024, the Authors Guild announced plans for a “human authored” label that members could append to books created without the assistance of LLMs.
These kinds of public gestures, in mission statements and on book labels, aren’t likely to carry legal or contractual weight. Authors Guild CEO Mary Rasenberger acknowledged that writers “will not have to submit their manuscripts to AI detection software to receive the ‘human authored’ stamp.” Still, these kinds of readily available, public stances help like-minded literary citizens find one another.
Acknowledging AI’s current reach—and codifying a response
Many organizations have focused their efforts on codifying a response to tools like ChatGPT and Google’s Gemini. These institutions acknowledge genAI’s increasing prevalence in the publishing world while seeking to put guardrails on its use. One such member organization is The Authors’ Licensing and Collecting Society (ALCS), a British organization working to ensure fair compensation for authors. It recently released a report on licensing, remuneration, and transparency pertaining to AI. In the report’s foreword, CEO Barbara Hayes writes: “It is our belief that licensing offers the best solution for ensuring authors are recognised and fairly compensated for the use of their work in AI systems, if that is what they choose to do.” The ALCS report, which synthesizes feedback from over 13,000 survey responses, goes on to say that 81% of respondents “would want to be part of a collective licensing solution if ALCS was able to secure compensation for such cases, and where case-by-case licensing was not a viable option.”
The Authors Guild has created a similar option for its members by partnering with Created by Humans (CbH), a platform that enables authors and publishers to license their work to AI developers. Formalized licensing agreements such as these target a fundamental criticism of genAI: that it is rooted in piracy and plagiarism. Still, staunch opponents of genAI might argue that inking agreements, as a means of avoiding intellectual theft, cedes important ground to the entities that build large language models. Some publishers and authors might feel they must choose between being victims of theft and opting into licensing agreements; opting out entirely can seem unrealistic for literary entities.
Some small presses have prioritized transparency over winding back the clock on AI, inviting submitting authors to leverage genAI as long as they are clear about how it was used. For example, the submission guidelines for New Michigan/DIAGRAM’s chapbook contest say that manuscripts using “algorithmic writing tactics (cut-ups, collage, erasure, large language models, AI chatbots, autotranslation softwares, etc.) are fine as long as you’re up front about their use.” In fact, New Michigan/DIAGRAM chose as its contest winner Lillian-Yvonne Bertram’s collection, A Black Story May Contain Sensitive Content, a work composed with the use of ChatGPT.
Many publishers have begun to include AI-specific language on the copyright pages of their titles. These disclaimers are meant to spell out the proper usage of the published text and seek to protect the author’s and publisher’s agreement. In late 2024, Penguin Random House changed its standard copyright page to read: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.”
Similarly, consider this language from the copyright page of Brightly Shining, a Grove Press title published in November 2024, written by Ingvild Rishøi and translated into English by Caroline Waight: “Any use of this publication to train generative artificial intelligence (“AI”) technologies is expressly prohibited. The author and publisher reserve all rights to license uses of this work for generative AI training and development of machine learning models.”
Such language expressly prohibits any AI tool from ingesting Brightly Shining. At the same time, it leaves open the possibility of a licensing agreement in the future.
Takeaways for small presses
So what can small and independent presses learn from the above examples?
For one, there is a clear difference between what a publisher can say about AI and what it can do about it. Even an organization as robust as the Authors Guild would have trouble certifying the legitimacy of “human authored” titles.
That isn’t to say a publisher’s words and actions are futile. In fact, clearly and consistently voicing your press’s stance on AI allows like-minded authors, readers, and book lovers to find their way to your titles. A genre publisher may be eager to work with an innovative, prolific novelist who uses genAI to help produce multiple manuscripts per year. Alternatively, a small press that effectively communicates its anti-AI stance might endear itself to emerging authors and indie booksellers who feel similarly.
Meanwhile, it’s important for publishers to shore up and codify what they can. Updating publishing contracts, spelling out who has the right to make choices about AI licensing agreements, and adding AI disclaimers to copyright pages: all these steps can help a publisher protect their titles and their authors. The landscape of genAI is constantly shifting; fortunately, the agility and small size of most indie presses allow them to adapt, change, and respond in kind.
Kevin Breen lives in Olympia, Washington, where he works as an editor. He is the founder of Madrona Books, a small press committed to place-based narratives from the Pacific Northwest and beyond.