I’d vaguely heard of chatbots but had no real understanding of what they were or what they could do. Then, in February 2023, Neil Clarke of Clarkesworld Magazine issued a statement that he was temporarily suspending submissions due to an unprecedented influx of AI-generated stories. That got my attention.
Writers’ reactions to the rising use of AI apps in fiction writing have been mixed. I’ve heard everything from “it’s the end of the world as we know it” to “cool, I can use this tool to churn out a book every month.”
The name on everyone’s lips seems to be ChatGPT (and that’s what I’ll mostly refer to), but there are many other AI apps targeted specifically at creative writing. For example: Sudowrite; Jasper; Rytr.AI; Quillbot; Sassbook; and Grammarly.
At the end of May 2023, a quick search of Amazon showed 225 books on ‘ChatGPT for Writers’, and over 1,000 published books with ChatGPT listed as an author. It’s impossible to know how many others have been produced using some form of AI.
What is very clear is that this genie cannot be stuffed back in its bottle.
Because I believe as writers and readers we need to inform ourselves, I spent two weeks doing research and having discussions with fellow authors. I presented what I found on the AI in Publishing Panel at the SFFANZ ReConnect2023 online conference held 3-4 June 2023. This post is based on the panel discussion.
What ChatGPT (et al) produces
Even though ChatGPT is referred to as an AI (artificial intelligence), it’s not at all intelligent. It doesn’t think or have emotions. Very simply, it uses algorithms to predict likely next words. These predictions are based on the material it’s been trained on, which is basically the internet. So, it’s not surprising that AI responses fall into tropes and predictable phrases, show cultural bias, and can be based on lies and hallucinations. Uncritical use of AI-produced text is very unwise!
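For readers curious what “predicting likely next words” actually means, here is a deliberately tiny, hypothetical sketch of the idea. It bears no resemblance to ChatGPT’s real scale or architecture (the training text and all names are invented for illustration); it simply counts which word most often follows each word in a scrap of text, then “writes” by chaining those predictions:

```python
from collections import Counter, defaultdict

# Toy illustration only (nothing like ChatGPT's actual architecture):
# a model that "writes" by always picking the word most often seen
# following the current word in its training text.
training_text = (
    "the dragon flew over the mountain and the dragon "
    "breathed fire over the village"
).split()

# For each word, count which words follow it and how often.
next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in next_words:
        return None
    return next_words[word].most_common(1)[0][0]

# Generate a short "story" one predicted word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # → the dragon flew over the dragon
```

Notice the output: the toy model immediately loops back into its most familiar phrasing. That, in miniature, is why statistically generated text gravitates toward tropes and predictable phrases, and why it inherits whatever biases its training material contains.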
Acquisition of data.
ChatGPT (et al) works by producing text that is statistically likely to follow the text that came before it. It’s been trained to do that by scraping the internet for material, with no consideration for copyright. In my view this is unethical and undermines the entire purpose of copyright.
New Zealand does not recognise the ‘Doctrine of Fair Use’ as a defence for copying copyrighted works. The Copyright Act 1994 permits certain activities in relation to copyright works, including: incidental copying; fair dealing with a work for the purpose of criticism, review, and news reporting; fair dealing with a work for the purpose of research or private study; and copying for educational purposes that follows the requirements of the Act.
Copyright protects the expression of ideas or information, not the underlying idea itself. It is likely that an infringement case in New Zealand would revolve around whether the use of copyright works by the AI amounted to copying a substantial part of the works – i.e., has the AI copied an important or distinctive part of an earlier copyright work to create something new?
It’s possible ChatGPT could respond with ideas or even wholesale phrases provided by other writers, including writers who didn’t consent to giving ChatGPT their data. At this time, creatives have no ability to opt out of having their work used to train ChatGPT.
There is at least one class action taking place in the US in the visual arts where artists are suing for breach of copyright. The outcome could have implications for writers.
Authors should be compensated when their works are used in training of generative AI, and AI developers should disclose what works they use to train their AI. Authors should have the right to opt out.
Use of AI needs to be transparent.
Unless an author using AI acknowledges they are doing so, there are limited ways to know whether a work is entirely the product of human endeavour. Yes, for now, there are programmes to assess work, but they can throw up false positives, e.g., when authors are writing in a second language.
There are numerous ways in which AI can be used by authors, e.g., idea generation, plotting, writing chunks of text, character development, and grammar checking. Acknowledging the level of use would go a long way in retaining the trust of readers and publishers alike.
At the very least, authors, publishers, platforms, and marketplaces should be required to identify when a significant portion (e.g., more than 30%) of a written work has been generated by AI.
Copyright of creative works should be restricted to humans.
Wholly AI generated works can’t be copyrighted in the US, but in stark contrast, they’re automatically copyrighted in New Zealand (and the UK).
NZ copyright law expressly states that AI-produced creations are covered by copyright. Originally, this was to protect computer-generated outputs such as weather forecasts and the like.
But…for copyright to apply in New Zealand, there must be an original work and there must be an author. For work to be original, the author needs to demonstrate they've applied sufficient time, skill, and effort in creating it. Inputting a simple prompt is unlikely to be enough. No legal tests have been made in New Zealand, and it seems unlikely to happen any time soon.
It’s already difficult to make money as a writer. Society doesn’t place much value on art. By devaluing writers and crowding the market, AI could further erode what little we earn.
Currently, there’s a free-to-use version of most AIs, with increasing scales of cost for added features. Companies are making money from data sets compiled through non-consensual scraping of the internet – the writers who created the source material are not.
I’ve worked hard and spent a lot of time and money to develop my writing skills and create my brand. If someone uses the prompt, “Write a 1000-word fantasy story in the style of Jacqui Greaves”, I think I should be compensated. Why should someone else be able to profit without consequence by using my voice (without my consent and potentially bringing harm to my brand)?
There should be a requirement for permission and compensation for authors when their works are used in outputs, or when their names or identities or titles of their works are used in prompts.
When thinking about submitting stories written with the assistance of AI, writers need to be aware of the AI policies of publishers.
Some have very clear policies:
Clarkesworld Magazine states:
“We will not consider any submissions written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.”
QueerSciFi’s annual flash fiction contest rules say:
“We do not allow submission of work that is partially or entirely generated by Generated Artificial Intelligence (generative AI).”
Other publishers have statements along the lines of ‘we only accept submissions from humans made of meat.’
In contrast, Space and Time Magazine had a recent call for their Friend or Fiend? AI and Human Creators: Special Edition. They invited submissions demonstrating how AI can enhance human work, but not replace it. “To demonstrate that human creativity is vital and can not be replaced by AI, but it can free us from tedious tasks so we have more time to create.”
In the already murky world of self-publishing (a space I inhabit), it’s a bit of a free-for-all. I predict it will become even more difficult to rise above the dross. I’ll be adding an AI statement to my social media profiles and the metadata of my online publications, and I hope I’ve built enough trust with my readers.
Readers have a role to play. If readers want works created with heart and nuance, with original themes, characters, and settings, they need to insist that their bookstore (physical or virtual) has a clear AI disclosure policy.
If there’s no market for fiction largely or wholly produced by AI, then there’s no reason for a supply. This only works where authors and publishers disclose when AI has been used in the production of a creative piece of work. As stated above, this may be challenging in the self-publishing arena.
Readers may need to become more discerning in which authors they read and take it upon themselves to do their due diligence.
And, for those of you happy to read AI generated stories, you do you!
Environment: not directly related, but an interesting aside
Forbes reported that training a single AI model can result in the emission of more than 283,000 kg of carbon dioxide equivalent – around five times the lifetime emissions of an average car, including its manufacture.
There are also issues around the enormous quantities of water required to cool processing centres and the ongoing challenge of e-waste disposal.
Websites I visited while researching