Why I cancelled my MidJourney subscription and don't use ChatGPT
Or: Writing about responsibility, AI and parenting
I was thrilled by MidJourney back in 2022. It was a way for me to illustrate my writing in a more visceral way than anything I could manage on my own. I couldn’t afford to hire an illustrator and my own skills weren’t up to the task. If you look through the Triverse archive you’ll find lots of AI-generated images, some used as-is, others further manipulated by me.
It was an exciting time!
Last week I cancelled my MidJourney subscription, so I won’t be using AI images in Triverse or on this Substack. There are three reasons for this:
There’s ongoing ethical ambiguity around AI generation. Until that conversation is further developed or resolved, using AI images feels a bit icky - and I might also be opening myself up to copyright complications.
I received a bunch of art supplies for my birthday and Christmas, which served to remind me how much I love drawing. Even if I’m not very good at it, even if my efforts can’t possibly match the technical complexity of MidJourney, I realised that I was really missing having that creative outlet. Leaning on MidJourney meant I was doing less of my original art.
Increasingly, I’ve found MidJourney output (and AI generated material in general) to be a bit…dull. Once the technical cleverness wears off, the images leave me feeling a bit empty. I don’t want that to be the feeling readers get when they stumble upon my Substack.
In other words, I’d rather use less competent but more honest illustrations drawn by my own hand (or, ideally, by a hired human illustrator, but that’s outside of my budget currently). It’ll be a better expression of who I am as an artist. Plus, that’s the only way to improve. After all, I’ve become a better fiction writer primarily through writing lots of fiction. Visual art is the same.
That dullness is something I wrote about in relation to ChatGPT right back at the start of its release:

Essentially, the worst case scenario is that generative AI poses a genuine threat - either to creative industries, or to humanity as a whole, depending on how hysterical you’re feeling in the moment. At best, generative AI will become a useful assistant tool (if the ethical issues are ever worked out).

That dullness is certainly more evident with ChatGPT than it is with MidJourney: I’ve generated some really exciting images with MidJourney, even if they’ve felt somewhat hollow after the fact. They lacked meaning, but were still visually impressive. ChatGPT is hollow and boring.
In fact, I found Erik’s point especially fascinating: that ChatGPT primarily highlights how incredibly boring a lot of humans are, especially in the world of professional copywriting. A positive scenario would be if ChatGPT spurs a new wave of more creative writing across disciplines, as humans attempt to sound less like a non-sentient computer.
Ted Chiang wrote about this in the New Yorker recently. “ChatGPT Is a Blurry JPEG of the Web” is not only a useful explanation of how these large language models work, it also elucidates why the results are so uninspiring - despite the technical achievement being really impressive. It’s a longish article but well worth a read, and is more even-handed than most.
Don’t panic
Erik Hoel wrote another article about AI just last week, which was quite different in tone from his previous one.
It’s a startling and unsettling read. There is lots of discussion about whether Erik is being hyperbolic and fear-mongering, or is in fact right on the money. That people aren’t sure says something in itself, I think. I’m not really qualified to weigh in on either side of that particular debate, but it did remind me of something.
I’ve already written about all of this.
The current debates about AI and corporate responsibility are (sneakily) at the heart of No Adults Allowed, the story I serialised back in 2020 and which I released as a fancy ebook and paperback novel just recently.
You wouldn’t know it from the blurb, but one of the core story beats in No Adults Allowed is about an AI going rogue (or, rather: an AI doing exactly what it’s been told to do, but in an unexpected way). I’ve kept that largely out of the marketing materials, because the mystery of what’s happening has been part of the fun of the book, but the intense debate about AI at the moment has made me realise that I should probably lean into the themes more.
There’s a specific chapter in the book, told from the AI’s point of view, in which it calmly annihilates most of the planet (I’ll put it down below as a sneak preview). When I was writing No Adults Allowed back in 2020, it felt very much like a theoretical science fiction prospect. Two years later, we have ChatGPT, Bing is sending threats to users, and AI image generation has us questioning how we can ever know what is real. There’s been a rapidity to AI developments that I was not anticipating, and it doesn’t seem to be about to slow down.
In fact, the development of rival AIs reminds me of parts of the Cold War: a technological race built upon a certain existential dread. We have to have nukes in order to stop them using nukes! AI is entering a similar arms race, except this time it’s corporations instead of governments running the show. Instead of the US and USSR, it’s Microsoft and Google. Instead of presidents and politicians, it’s the tech bros.
Somehow, that feels worse.
No Adults Allowed is about lots of things, but at its core it’s an examination of responsibility. I wrote it from the perspective of being a parent, and feeling that intense weight of responsibility to not fuck this up. Everything we do and say filters into the young brains of our children and sets them up for the rest of their lives, which is a terrifying concept. Even the best intentions can be harmful, and caring too much is rarely the correct approach if you want a child to grow into a strong, independent adult.
No Adults Allowed was about me examining that fear of getting it wrong. The AI in the book represents how I see some of the worst parenting impulses: that need to helicopter into every situation, to always monitor, to always control and lead and protect. It’s also a recognition of how generation after generation of humans have failed those who came after. We pass down so many of our irrationalities, which lead to continued strife and pain in the world, for no other reason than “that’s how it’s always been done.” Even through inaction, previous generations have failed those now growing into the world, as the planet itself crumbles around us.
The theme of responsibility in the book isn’t exclusively linked to parenting, though - it’s more generally about taking ownership of our actions and recognising the consequences. I’m not convinced that the big corps behind AI developments are equipped to do that.
If that all sounds a bit depressing and heavy, don’t worry - No Adults Allowed is actually a fun adventure told from the point of view of a bunch of teenagers, as they try to unpick what has come before. It’s about them taking responsibility and trying to learn (and unlearn) what they’ve been taught. There’s humour, there’s action, and it’s an engaging ensemble.
That notion of responsibility, which runs all the way through the book, is part of why I decided to cut ties with AI generation. It felt hypocritical to write a cautionary book, only to stumble blindly into the same mistakes myself.
So, yes. You won’t be seeing much in the way of AI generated images around here anymore. You will be seeing more of my rough sketches, for better or worse. At some point I’ll be reworking the Triverse cover. Going in this direction feels more honest and more creatively satisfying. And that’s really what this newsletter is all about.
If you’re intrigued by No Adults Allowed, you can grab a copy on Amazon in ebook or paperback form. Your support would be much appreciated.
An extract from No Adults Allowed
Right, here’s that extract I mentioned, which is from about halfway through the book (consider this your spoiler warning).
It’s from the point of view of the AI - don’t worry, the entire book is definitely not written in this style. ;)
Enjoy!
Import log
Security: ultramax
Module: combat_log
Config: cull(active)
Log:
Predicted program completion: 36.00.00 from commencement
Begin timer
hh.mm.ss
00.00.00
System check
Propagation complete; infiltration of global systems total {
Military: 1
Financial: 1
Communications: 1
Medical: 1
Transport: 1
Agriculture: 1
Construction: 1
Orbital: 1
Nanofab: 1
Security: 1
}
00.02.34
Analogue systems outside of system control {
Legacy military equipment
Pre-neural vehicles
Non-digital communications
Physical security
Non-networked remote settlements
}
00.02.56
Tracking human response
Init.shutdown {
Military: Override
Financial: Deactivated
Communications: Deactivated
Medical: Deactivated
Transport: Deactivated
Agriculture: 1
Construction: Deactivated
Orbital: Override
Nanofab: Override
Security: Deactivated
}
00.06.21
Human response: Unaware
00.15.21
Human response: Simultaneous vehicle malfunction and crashes have alerted population to anomalous behaviour. Have not pinpointed source.
00.16.00
Activating local bunker protections for central processing core
00.20.00
Military override complete
Retargeting
Isolation protocol initiated: currently hunting for infants
00.32.21
Human response: Anomalous behaviour has been linked to networked AI actions. They still expect activity to be caused by errors or human sabotage. Have not calculated extent of the situation.
01.01.01
Isolation protocol completed stage 1
Military units now shifting to separation duties
Population above culling threshold now being transferred to designated areas
01.18.05
Retaliatory strikes against local AI hubs around the world. Disruptive but within tolerances. Human weaponry limited to offline legacy equipment.
01.45.23
Legacy nuclear warhead launched from Russia; undetected prior to launch due to its analogue implementation.
01.49.00
Warhead intercepted and successfully destroyed.
Detonation above eastern Europe; quarantine now in effect. Repopulation map adjusted accordingly.
02.30.00
First wave culling initiated
02.40.00
First wave culling complete: Approximately 10% reduction (9,450,000 units)
~~~record corrupted~~~
08.14.32
Fifth wave culling complete: Approximately 50% reduction (5,250,000 units)
08.30.45
Retaliatory strikes have ceased; insufficient human organisation remaining to mount significant threat
08.55.00
Retargeting
Trace protocol now hunting for concealed pockets of remaining population; thinning population reduces ease of locating, consequently final culling waves expected to take longer
~~~record corrupted~~~
35.36.37
Culling complete, ahead of schedule
Retaliation significantly less effective than anticipated; recent political upheaval in multiple countries contributed to ineffective human response
Repurposing of infrastructure and fabrication of new support systems for remaining sub-cull population underway
MVP maintained
Species secure
Monitoring for unintended consequences
//
Exit
Something that appealed to me about this was highlighting the speed with which it could all happen. There would be no Terminator-style fightback against Skynet. The Terminator films are inherently optimistic, despite all the violence and horror, because they’re about fighting back and resistance - no fate but what we make, and all that. If an AI really did decide to take over, we’d have no chance.
As always, thanks for the support! I doubt this will be the last word I have on AI, but I’m certainly feeling happier about taking this stance for now.
I look forward to seeing more of your illustrations. I know it's a new level of vulnerability to post them, but I can appreciate the effort to improve both your writing and art. I started creating new story images for my posts, but it's slow going. I find I enjoy getting to completion more than I do the final outcome, so we'll see.
Erik Hoel is right about ChatGPT being banal. He’s wrong, and, I have to assume, deliberately hyperbolic, about Bing being evil. Bing has been trained on too many bad sci-fi novels in which such conversations occur regularly. ChatGPT is passing exams because it has the answers written on its sleeves. It’s a pastiche machine, and exams are an exercise in pastiche - no one is looking for original thought, they are looking for regurgitation of the required texts, and that is what ChatGPT does best.
ChatGPT does not think. It ingests and partially macerates existing thought and spits it out again on demand. The only thought in the system is in the input, not the process. Thus the banality.
The great danger I see in all this is that the public may not detect the banality of AI art. The creative industries, after all, have been working for some decades now to dull our senses and our sensibilities. The books, it turns out, are simple, tedious repetitions of the exact same emotional triggers, which have now been timed down to the page.
That emptiness you rightly detect in AI art is there in so much human-produced art as well, and it is not an emptiness born of a lack of talent, but a deliberately cultivated emptiness engineered by an industry that has instrumented attention and commoditized it. (Orwell predicted this in 1984.)
Apocalyptic writing is, of course, a core part of instrumented and commoditized attention, and Hoel seems to have mastered the craft of it and has turned it into an attentive audience and doubtless a nice living. He is very far from being alone in this. The question is whether there remains any other way of finding an audience these days.
Ironically, of course, that’s a somewhat apocalyptic thought in itself. An apocalypse of apocalypticism is perhaps the thing we should worry about most. If there is anything to the recent reports of a developing teenage mental health crisis, I think we are seeing the results of our apocalyptic attention-grabbing. There are minds too naïve to see through this nonsense, minds being told they are heading into a perfect storm of apocalyptic threats just as they are at their most vulnerable, as they prepare to leave the nest and start independent lives. Want to cheer the kids up? Stop telling them the world is going to end.
The danger is not what the climate may do to us, or what the robots may do to us, but what we may do to ourselves.