AI 101 Series, 6: GenAI News Affecting Publishers and Authors


by Kevin Breen


This article is the sixth in a biweekly series examining AI and its implications for publishers and authors.


Trying to get a handle on all the current publishing/genAI news is like drinking from a firehose. To help you get your bearings, here are some foundational sources related to AI and publishing: articles covering questions about authorship, op-ed perspectives from our peers, licensing efforts and stumbling blocks, as well as ongoing litigation.

GenAI authorship

Ars Technica, Benj Edwards: “Chicago Sun-Times Prints Summer Reading List Full of Fake Books.”

Two-thirds of the titles on a summer reading list, printed in a Chicago Sun-Times insert, were fake books attributed to real authors. The creator of the list, Marco Buscaglia, confirmed that he used AI to generate the content. From Buscaglia: “I do use AI for background at times but always check out the material first. This time, I did not… On me 100 percent ….”

AI Business, Deborah Yao: “Author Admits to Using AI to Write Award-Winning Novel.”

Rie Kudan’s novel won the Akutagawa Prize for emerging authors in Japan. Kudan says about 5% of her novel was authored by genAI. Meanwhile, Tsinghua University professor Shen Yang used an LLM to write The Land of Machine Memories. The novel earned second prize in a competition put on by the Jiangsu Popular Science Writers Association in China.

Jane Friedman: “How AI-Generated Books Could Hurt Self-Publishing Authors.”

This article explores the guardrails being applied to the proliferation of low-quality, AI-generated books, a surge that risks drowning out other titles, especially self-published books on print-on-demand platforms. Amazon’s KDP now limits users to uploading a maximum of three titles per day. Meanwhile, “retailers and distributors like Barnes & Noble and IngramSpark have been creating stricter policies” to curb the rise in AI-generated books.

Vice, Luis Prada: “‘Human Authored’ Book Certifications Are a Thing Now Thanks to AI.”

The Authors Guild is offering a “human authored” badge to members for the covers of their books, so that book buyers and readers can easily distinguish human-authored books from AI-generated ones. (Note: the Authors Guild will not be implementing a certification or vetting process; this is a self-selection tool.)

Perspectives from literary citizens

The New Yorker, Ted Chiang: “Why AI Isn’t Going to Make Art.”

From the author: “The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.”

Lit Hub, James Folta: “Publishers are already using way too much AI.”

Folta summarizes the findings of an AI report from Bertelsmann, the German media company that owns Penguin Random House. The report chronicles the use of genAI at publishers such as Hachette and HarperCollins, as well as in Penguin Random House’s advertising division.

Substack, Rebecca Makkai: “We Have Nothing to Fear From Our Robot Overlords.”

From the author: “[AI] can actually teach us to write better. And not in the way you think…. The AI might occasionally be weird…but it isn’t original. It’s caught forever in literary adolescence.”

WIRED, Vauhini Vara: “Confessions of a Viral AI Writer.”

The writer describes their experience of writing a personal essay about grief, assisted by ChatGPT. The essay was subsequently published in The Believer and adapted into a piece for This American Life. “Artificial intelligence had succeeded in moving me with a sentence about the most important experience of my life.”

Licensing efforts between publishers and genAI companies

The New York Times, Michael M. Grynbaum and Cade Metz: “The Times and Amazon Announce an A.I. Licensing Deal.” On May 29, The Times reported that it had agreed to license its editorial content to Amazon “for use in the tech giant’s artificial intelligence platforms.” Financial details of the arrangement were not reported, and Amazon declined to comment beyond the statement issued in The Times.

The Bookseller, Heloise Wood: “Wiley and Oxford University Press confirm AI partnerships as Cambridge University Press offers ‘opt-in.’” In the summer of 2024, three academic publishers reached licensing agreements with AI companies so that LLMs could be trained on the publishers’ books. (Wiley later confirmed that its $44 million deal would not include opt-outs for its authors.)

Bloomberg, Hannah Miller and Dina Bass: “Microsoft Signs AI-Learning Deal With News Corp.’s HarperCollins.” In November 2024, HarperCollins reportedly reached a deal with Microsoft that would allow the tech company to use “select nonfiction backlist titles for training AI models.”

The Authors Guild press release: “Authors Guild Partners with Created by Humans to Empower Authors in the AI Era.” From the Guild: “Without licenses, authors and publishers have no means of controlling LLM uses, including end-user outputs that incorporate their works.” To that end, this partnership aims to offer authors (and publishers) a way to “control, manage, and monetize their content” when it comes to LLM ingestion.

Litigation and legal defenses

The New York Times, Cade Metz: “Anthropic Agrees to Pay $1.5 Billion to Settle Lawsuit With Book Authors.” On September 5, 2025, Anthropic, a leading artificial intelligence company, agreed to pay $1.5 billion to a group of authors and publishers after a judge ruled it had illegally downloaded and stored millions of copyrighted books. Several days later, a federal judge skewered the settlement, raising the possibility that the case might end up going to trial.

TechCrunch, Anthony Ha: “Jack Dorsey and Elon Musk would like to ‘delete all IP law.’” In April 2025, Jack Dorsey (co-founder of Twitter) wrote online, “delete all IP law.” Musk quickly replied, saying “I agree.” Underpinning many creators’ fears surrounding AI is the concern that their authorship will be discounted or stolen by large language models.

The Bookseller, Matilda Battersby: “Penguin Random House underscores copyright protection in AI rebuff.” Penguin Random House changed its copyright-page disclaimers to help protect authors’ intellectual property from being used as LLM training data without their express permission.

NPR, Bobby Allyn: “‘The New York Times’ takes OpenAI to court. ChatGPT’s future could be on the line.” This January 2025 article recaps the ongoing legal battle between The New York Times and OpenAI. The Times argues both that OpenAI stole its data for training and that it meaningfully competes with the newspaper’s services. Meanwhile, OpenAI asserts that its use of the material is protected by fair use.

Reuters, Aditya Kalra, Arpan Chaturvedi and Munsif Vengattil: “OpenAI faces new copyright case, from global book publishers in India.” This article looks at the growing legal battle facing OpenAI in India, where the New Delhi-based Federation of Indian Publishers has recently filed suit against OpenAI. The federation’s members include companies like Penguin Random House, Pan Macmillan, and Bloomsbury.


Kevin Breen lives in Olympia, Washington, where he works as an editor. He is the founder of Madrona Books, a small press committed to place-based narratives from the Pacific Northwest and beyond.