Taking another look at genAI for 2025
Updated thoughts on the most tedious of all the tech bubbles
It’s been a while since I’ve written about AI in detail, largely because it’s overhyped and dull. There is value in keeping an eye on it, though, and I want to be open to new developments. So here we are.
The short version of my evolving thought process with generative AI is this:
MidJourney appears in mid-2022, and I get very excited
I go all-in and start using it to illustrate my weekly serial (you’ll still find remnants of this in the archive — at some point I will go through and strip those images out)
The more I research generative AI’s ethical and environmental issues, the more my face starts to go 🤔
Later in the year, ChatGPT comes out and my writerly response is “whoa! Hold on! Images are fine, but WORDS? That’s my domain!”
I fall into a pit of hypocrisy and spend some time fashioning a ladder with which to climb out (I made the ladder myself, I didn’t use AI, which made it quite satisfying, actually)
In early 2023 I explain why I won’t be using genAI going forward
June 2024, and I publish ‘It’s not artists who should fear AI’, which becomes one of my most popular articles
It’s 2025! The future! Tech companies continue to argue that it’s fine to take copyrighted material on a mass scale to train their product, so that they can then make loads of money from that product — even though similar corporations spent the previous fifty years pursuing individuals for comparatively tiny piracy and copyright breaches.
Whether one thinks the hoovering up of other people’s work without permission is theft or not is a matter of semantics. We’re talking about huge companies using other people’s hard work, without permission, in order to make lots of money, regardless of the label.
Comics creator Jamie McKelvie put it fairly succinctly over on Bluesky, I thought:
Part of my objection around generative AI, as pushed by OpenAI, Meta, Microsoft and the rest of the tech-oligarch brigade, is that it’s repeatedly positioned as a way to replace artists. The people who talk about genAI the most are manager-level types who seem to resent having to work with creative people in order to get anything done. Creative people are a roadblock, a cost, an inconvenience, and they see LLMs as a way to bypass them.
‘Anyone can write a book’ is code for ‘we don’t need writers’. ‘Anyone can code’ is, er, code for ‘we don’t need coders’. ‘Anyone can create images’ is code for ‘we don’t need artists’. ‘Democratisation’ is code for ‘consolidation’.
It reveals a misunderstanding on their part of what creativity is, how it works and how it gives objects value. Which shouldn’t be a surprise — that’s why these people are executives and managers in the first place, rather than creative practitioners.1
Annoyingly, it didn’t have to be like this. There’s an alternate timeline out there somewhere in which tech companies involved artists and the creative industries in the development of AI from the very beginning. In which, as soon as the tech became a possibility, they worked with rights holders to acquire access to libraries and training data through legal, mutually agreed contracts. In that scenario there would have been a way to properly audit the origins of AI-generated material, all the way back to the training data, and to pay royalties to the creators of the original training data.2
It didn’t have to be a situation in which the tech and creative industries are at each other’s throats and always in opposition. There could have been cooperation. That’s an enticing vision that is explored by Dr Jeremy Silver in ‘Copyright and AI – a new AI Intellectual Property Right for composers, authors and artists’ over on the Creative PEC blog. I find it hard to see that becoming a reality now, at least while the current childmen ‘leaders’ are at the top of the tech companies.
The Guardian and other media outlets happily regurgitated an OpenAI press release recently, in which Sam Altman (CEO of OpenAI) blathers on about how he was really impressed by some ‘creative writing’ generated by his product. It’s the usual nothing-hype designed to con investors out of cash and keep the train rolling, but it also highlights the key issue with the central concept of generative AI (at least as envisioned by Altman and his cronies).
Here’s how I see it:
They seem to always miss that the process is the important bit of being creative. The act of doing it is what makes it interesting and rewarding. The final output is a useful, vital record of what happened, but it’s a light on the cave wall: it’s not the light itself. Value is created in the process of making the thing — that’s why a Picasso sells for millions, but an exact replica is largely worthless. The painting itself, the image, has no inherent value as an object, but Picasso’s unique act of creation imbues it with meaning and value.
The object is what you see, but the process is what you feel.
That extends to everything else: music, books, movies, comics, radio, TV.
It’s weird to me that the tech bros don’t get this. Being able to create text and images without any effort3 is a fun novelty, and may have applications in some business contexts. But it’s not the same as being creative. An AI-generated work of fiction has the same value as a photocopy of that Picasso painting. By skipping the creation process, which imbues it with a point of view, the end product exists only as a facsimile. There’s nothing beneath the surface to give it meaning.4
I started thinking about all this again when I stumbled upon a little thought experiment doing the rounds: if a studio like Bioware used AI to generate a game’s dialogue, would that put you off buying it?
My initial response was ‘big yes’. If I could choose between two equally good games, but one of them used AI to generate dialogue, I’d go for the one that didn’t. In the same way, if I have two Substack articles to read, and one of them uses an AI-generated illustration, I’ll pick the one that doesn’t. In a world of infinite stuff, we all have to make choices and filter things out, and we don’t have time to read everything: discarding AI-generated material is a simple first step.
Maybe it’s not that simple, though. What if you can’t tell? In that thought experiment, what if Bioware didn’t disclose that they’d used AI?5 If I can’t even tell the difference, then why am I making such a fuss? Why does it matter in the first place?
It’s a good question. I don’t know if I have an answer just yet. Currently, generative AI is still fairly awful when it comes to creative output, and easily spotted, but that won’t always be the case. Maybe that’s where ‘no AI’ badges come in, although even then it can only really be done on a trust basis.
Here’s the thing, though: I have favourite authors and game designers and artists and filmmakers not just because of their final output, but because of their point of view, and their journeys towards creating those things. I like their work, but I also like them, to some degree. As people. I like that I could, theoretically at least, go have a chat with Kieron Gillen, or J Michael Straczynski, or Alex Garland, or Jon Ingold, or Tom Francis. I could talk to them at a con, or down the pub, or invite them to do a podcast interview, because they are real humans. That background thought enriches the work, for me at least. Even with authors who are dead, their work is still enhanced by the knowledge that they once lived. How amazing that I can access their brains and thoughts and memories, years after they died?

In her BBC documentary The Third Information Crisis, author Naomi Alderman explores how writing — and creating stuff generally — is a direct link to the past and the future. It’s a time machine, a way to very slowly communicate with people over a span of hundreds of years.
Communicating with a large language model that spun up a homogenised story in five seconds somehow lacks that same sense of wonder.
‘AI’ is a silly term that means everything and nothing. Forms of AI have been used in various industries for many years, prior to the emergence of ChatGPT-style genAI. There are many fascinating and vital applications of AI in manufacturing, healthcare, filmmaking, video games and more. In all those cases, the tools have been developed for the people already working in those areas, to enhance their work.
Which brings me to a recent round-up newsletter, which highlighted the music video for ‘A Love Letter to LA’ by Cuco. Here it is:

The techniques used to make the video are really interesting. The visual style was defined by a single artist, and AI was then used to extrapolate that core design across many more assets. I don’t know whether this was for budget or time reasons, or perhaps the artist wasn’t interested in working on the project for longer than was necessary, but it’s an example of the tech being used by artists to do something which would otherwise have been difficult. AI was also used to help the animators stay close to the key artist’s visual style.
Here’s a behind-the-scenes:
It’s a project where AI is being used by artists to create something that feels deliberate, and has a point of view, rather than AI being used as a way to bypass artists. The creative decisions are all being made by skilled artists, rather than being contracted out to an AI.
There are still all sorts of thorny questions, of course: should the production team simply have paid the key artist to work for longer on the project, so that all the assets were produced by them? What happens if the company continues to produce work based on the artist’s unique visual style without their involvement? What happens if they use the same technique to create a project based on another artist’s visual style, but without involving them at all? Is this still leading to an endgame in which artists are excised from the process, even if this specific project was artist-led?
I suppose these ethical questions are more about creative processes generally than about AI specifically. There’s always been the possibility of plagiarism, copyright theft and idea lifting, even if AI has made it easier to do at a monumental scale.
The problem, I think, is that the debate around genAI has never been honest, because the creative industries and artists were shut out from the beginning. It’s never been a legitimate discussion: it’s been driven entirely by the tech companies, for whom profit and growth are all that matter, and who are talking to venture capitalists rather than to artists or users. That disingenuous discourse has been running since 2022, and we can see it seeping into government policy and the icky deals being made between AI firms and legacy media companies.
If we do have a hype bubble burst, and a collapse of the VC-funded house of cards, perhaps we’ll then be able to pick up the pieces and move forward in a more productive and human way. A tools-first approach.
What this means for me is that I won’t be using generative AI anytime soon. Aside from anything else, I have no obvious use for it, especially in my creative practice. But you can bet I’ll be keeping an eye on it.
Projects like the ‘A Love Letter to LA’ music video might be a glimpse of what could have been, and perhaps still could be, if AI had been developed and championed with and by artists in the first place. It’s felt for a while that the 2022-2024 absurd-o-hype has peaked; perhaps in the coming years we’ll start to see more examples of the tech being used in a non-cynical way.
I won’t be holding my breath, though.
Meanwhile.
If you’d like to check out what I do, you can find my fiction here:
Tales from the Triverse story index
It’s an ambitious merging of science fiction, fantasy and crime fiction. Free to read until I wrap it up later this year, at which point it will likely go behind the paywall.
If you’re looking for my non-fiction writing guides, video tutorials and community discussions, you can find the most popular articles here!
There’s also a small number of us rewatching 90s scifi Babylon 5, as an example of highly successful serial fiction. And I have another, very sporadic blog about movies, video games and other stuff.

1. #notallmanagers, etc — I’m aware that many producers do wonderful work and enable their creative partners to do their best work. Those are not the people you tend to see wanging on about generative AI.
2. I don’t know much about blockchain, another semi-useless, over-hyped technology. But surely this would have been a perfect, real-world use case? Creating a proper digital ledger of generative AI sources? (A rough sketch of what that might look like follows these notes.)
3. No, ‘being good at prompts’ does not count.
4. No, having an idea for the prompt doesn’t make the end product interesting or give it value. Ideas are easy, ideas are everywhere. Everyone has ideas. It’s the creative translation of an idea into a new thing that is the special sauce.
5. A similar situation came up recently, with publisher Activision admitting after the fact that they’d used genAI in the latest Call of Duty game.
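As a coda to footnote 2, here’s a minimal sketch of what such a ledger might look like, with no actual blockchain machinery involved: just a hash-chained log in Python. Everything in it is hypothetical and invented for illustration (the ProvenanceLedger class, its field names, the licence labels), not drawn from any real system.

```python
# A minimal, purely illustrative sketch of a provenance ledger for
# training data. All names are hypothetical. Each entry records a
# training source and commits to the previous entry's hash, so any
# tampering with the history is detectable after the fact.
import hashlib
import json
from dataclasses import dataclass, field


def _digest(payload: dict) -> str:
    """Stable SHA-256 digest of a JSON-serialisable payload."""
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()


@dataclass
class ProvenanceLedger:
    entries: list = field(default_factory=list)

    def record_source(self, work_id: str, rights_holder: str,
                      licence: str, content_hash: str) -> dict:
        """Append a training-data source, chained to the previous entry."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        payload = {
            "work_id": work_id,
            "rights_holder": rights_holder,
            "licence": licence,
            "content_hash": content_hash,
            "prev_hash": prev_hash,
        }
        entry = {**payload, "entry_hash": _digest(payload)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Check that no entry has been altered or re-ordered."""
        prev_hash = "genesis"
        for entry in self.entries:
            payload = {k: v for k, v in entry.items() if k != "entry_hash"}
            if payload["prev_hash"] != prev_hash or _digest(payload) != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True


if __name__ == "__main__":
    ledger = ProvenanceLedger()
    ledger.record_source("novel-001", "A. Author", "licensed-2023",
                         hashlib.sha256(b"manuscript text").hexdigest())
    print(ledger.verify())  # True: the chain is intact
```

The design choice doing the work here is small: each entry commits to the hash of the one before it, so the record of what went into a training set, and who should be paid for it, could in principle be audited after the fact.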