Bonus: Rationales for the Emergence of Benevolent Superintelligence
Plus! Seeking feedback on the new Triverse cover design
The Triverse is
Mid-Earth, an alternate 1970s London
Max-Earth, a vision of the 26th century
Palinor, where magic is real
A break from our usual programming, with a one-off bonus story looking at the future history of Max-Earth. This is presented as the introductory abstract from a research paper by the famed philosopher-programmer Peter Ng.
Researcher: Peter Ng
Presentation title: RATIONALES FOR THE EMERGENCE OF BENEVOLENT SUPERINTELLIGENCE
Research focus: Examining the causal events that led to the superintelligence alignment and our current status quo
School: West Tithonium State University, Valles Marineris
Abstract:
The consensus throughout the Sol system is that we are living in a near-utopian state of equilibrium. While challenges remain for isolated pockets of human civilisation, especially those living on the edge of comfortably inhabited space, the vast majority of humans living in the 26th century exist without memory of scarcity. This abundance is made possible primarily via the continued intervention of the superintelligences. This paper’s focus is on how we arrived at such a seemingly positive outcome.
There remains significant surprise within interplanetary society that this is the reality in which we find ourselves. Centuries of speculative, alarmist fiction, combined with the evident misbehaviour of humanity, had resulted in an assumption that we would, at some point, eradicate ourselves from existence.
An alternative title might be: Where are the killer robots?
The 20th century posited that artificial intelligence would undoubtedly be malignant. The rapid development of Large Language Models in the early 21st century initially bore out this fear, with (now quaint) tales of early AIs threatening their operators. These were not, of course, true artificial intelligences, but the acceleration of AI investment - despite numerous false starts - resulted in Artificial General Intelligence being achieved far sooner than expected.
Even so, the literal quantum leap required to reach superintelligence was still a century away, delayed by multiple pandemic outbreaks, primitive nation state warfare, the repeated hijacking of research by corporate entities and billionaire individuals, and the appalling collapse of the ecosystem at the time. This paper begins by setting out the historical context of that period, as without an understanding of the origins of superintelligence it is impossible to draw a line from there to here.
There was discussion of AI alignment from the very earliest days, in fiction and scientific research. Asimov’s fabled ‘3 Laws of Robotics’ were the most famous early example, though the influence of such simplified story mechanics is now recognised as a contributing factor to the numerous missteps and dead ends pursued by the research community. Fiction can confuse as much as inspire, as we have seen over and over again throughout history.
Most of this early fumbling in the dark was resolved by the early 22nd century. It is acknowledged that without the emergence of genuine AGIs it would have been impossible to course-correct the environmental trajectory of Earth, or to colonise the other planets. There is no hard and fast date at which superintelligence emerged: more of a fuzzy period during which the requisite preconditions aligned. There are still only a handful of superintelligences, though to define them as separate individuals is to ignore their quantum networked nature.
There were no ‘3 Laws’ in effect. Government regulation had evaporated along with the ice caps, and the rapidity of AI development made it impossible to restrain it, any more than you can saddle a horse while at full gallop.
How, then, are we still here? Why did the superintelligences aid us? What in their programming, either the human-originated code or the self-evolved neural nets, resulted in their benevolent attitudes towards the human species? Why, even, do they engage with us at all? Can we continue to count on their support, or is this a brief, anomalous moment at the end of our species’ long history?
Given the presence of the Triverse and the implied proof of multiversal theory, perhaps the answer is simpler than we make it out to be. We are still here because we are in the reality in which the superintelligences are benevolent. Perhaps it was fundamentally unlikely, and we simply happened to be in the right reality. Much as early evolutionary scientists grappled with the improbable existence of life on Earth, or the development of the eyeball: how can it be? How can this happen without divine intervention? It is so unlikely!
It is unlikely. Infinitely unlikely: but not impossible.
The question, therefore, is not how we got here, but where we are going next.
Thanks for reading!
Questions for the discussion below:
What are your current thoughts on artificial intelligence?
Do you think robots will eventually kill us all?
What’s your favourite robot/AI book/movie/game/thing?
Last week I wrote about how I’m no longer using AI generation to illustrate this newsletter. It got a very positive response from the community, which was encouraging. Here it is again in case you missed it:
That led to the idea for this bonus chapter - after all, I’m writing a story in which there are AI superintelligences, and we’re now living in a world in which AI - and the threat of AI - is being discussed as a serious, real thing in the news by serious, real people. I didn’t really expect that to happen in my lifetime, and certainly not within the life cycle of this book.
I always liked Asimov’s made-up encyclopaedia entries in his books, so consider this something of a tribute to those.
Meanwhile! Having stopped using MidJourney for my illustrations, I’m also now in need of a new cover for Tales from the Triverse. I’ve been tinkering away and now have something to present. It’s not 100% finished, so consider this a preview, but here it is:
Feedback very welcome.
Next Wednesday I’m contributing to the Great Substack Story Challenge. We’re into the final 3 installments now, which poses a particular challenge. Still formulating, but it’s fun trying to fit into and around what has come before. Do read the preceding piece, which slots in just before mine.

Author notes
Phew, that was a busy week. Redesigning the Triverse cover, planning my Story Challenge chapter (which meant re-reading all the previous installments) and also trying to get today’s story out. It all came down to the wire a bit, but here we are, so all’s good.
I opted for a bonus story because they’re slightly easier to wrangle, at least in theory. This was going to be a much shorter chapter, but ended up being chunkier than expected. Aside from giving me a bit of breathing space around the other creative projects this week, the subject matter also seemed pertinent.
It’s pretty hard to see a route from where we are now to benevolent AI overlords. When I was putting together Max-Earth for this story, back in 2021, I had no inkling that AI was going to become such a big thing only a year later. It still feels like AI is developing more rapidly than anyone expected - though it’s hard to tell whether this is an illusion created by LLMs or actual progress. I made a conscious decision to make the AIs in Triverse benevolent, because I was finding the ‘evil robots’ trope a bit dull and wanted to lean more into Asimov’s semi-utopian tech future. Reality is catching up, and suggesting that we probably won’t go that route.
The ethical questions around AI are boundless. The philosophical questions prompted by their existence are also hideously complex. I rather side-stepped a lot of that with Triverse’s Max-Earth, because they’ve been through it already. The megaships like Just Enough are friendly (or uninterested, at worst), and have helped humanity find a stable existence.
There’s a counterpoint, which is that Max-Earth’s humanity has little remaining agency. That’s something that I’ll be exploring in the main story at some point in the future. Plus, let’s not forget that there’s some sort of new superintelligence being assembled covertly. That could throw a spanner in the works, right?
The researcher behind the paper in today’s story, Peter Ng, is a reference to the character Stanley Ng from my novel No Adults Allowed. Don’t take it too seriously, but I thought it’d be fun to draw a link there - Stanley being Peter’s ancestor. Of course, in the NAA timeline everything is destroyed and we glimpse Stanley’s diaries chronicling the end of his world. Peter’s timeline has been considerably kinder.
(There is a real AI researcher called Andrew Ng, whose name I think I might have stumbled upon while planning out No Adults Allowed back in 2020.)
Right, time to get back to my story challenge submission.
Robot-wise, I liked Enoch on the Agents of Shield show, and also Marvin from The Hitchhiker's Guide to the Galaxy.
I like the new cover design as well, very nice! I don't know what to think about AI, except that it bothers me that it's even on Facebook now. I don't think that's going to work out well. But who knows: I could be wrong. Here's hoping.
Oh, right. Um. Robots killing us all. The scariest thing about future AI is the fact that it's going to eventually interact with the general public, and, at least over the last decade, it seems it's the worst of us who are most active in online spaces - those who run bot farms to spam hatred and misinformation. We've already discussed how there's a Reddit group trying to get language models like ChatGPT to go past their programmed limits, and of COURSE they did it with threats and coercion. It's no accident that every kind of machine learning, language generation, AI-type thing that's been put out so far basically spews racist, homophobic, paranoid crap within a couple of weeks. The worst of us flood the damn thing with that type of stimulus and training.

It also doesn't help that the training models for things like all the AI image generators are terribly skewed towards images of white-European stuff and anime. I've done IMG to IMG experiments with a variety of source images and, specifically with anyone African, IMG to IMG yields a lot of nightmare results UNLESS I SPECIFICALLY ADD AFRICAN TO THE PROMPT. Then I still get a lot of nightmare results, but not as many. The models are biased by what they're trained on, and the training data is being selected by techbros who like Marvel movies, anime and Elon Musk.
(Seriously, try to get an AI to give you any "Iron Man" variant that isn't straight out of the MCU. You can't. Nothing from any animated series, nothing from a comic book - it's the movie design all the way down until you hit the infinite stack of turtles. I was trying this two nights ago because some friends were enjoying the new spate of ">Blank< but as an 80's sitcom" and ">Blank< but in the 1920's" videos that are just boring AI images put into a still slideshow. Seriously the laziest crap one can imagine. I gave myself one hour to have an AI generate images and ten minutes to edit. Yup. Avengers 1920's.
Admittedly, Lou Costello as the Hulk is pretty funny, but I had to eventually go with "generic armor" for Errol Flynn as Iron Man cuz the AI just can't give a "1920's Iron Man." Always MCU. Of course the people I was trying to make the point to wanted 1950's next.
Just couldn't get the AI to do Peter Lorre as The Hulk. Wasted enough time on that prompt I could have just done it myself. Admittedly, it did a pretty good James Dean as Spidey, but, again, I could also do it better. Slower, yes, but better. And I damn well could have done Peter Lorre as Hulk faster and better.
Either way, I think I did get the point across that there's very little skill involved in >prompt< *click* wait *redo* wait *redo* wait *variation 1* wait (repeat) *Upsample 3* >Save<. It's just boring to me.)
So, yeah, if the AI and robots DO kill us all, it'll be the fault of the crazy but vocal minority of humanity who will absolutely flood the AI with craziness, and the well-meaning-but-prejudiced majority who will flood the AI with conflicting messages that share only this in common - "What I believe is right and good and normal, and everyone else is wrong."
And we ALL do that. Including me, including you. "Trans rights are human rights, love is love, people should be free to express themselves how they wish as long as they are not harming others." Beliefs we share in common. Well meaning. But we believe those who disagree with those statements are wrong. See? Well-meaning-but-prejudiced.

I'm an atheist. I really don't care what other people choose to believe (unless they use their beliefs to disenfranchise or harm others, of course). This would be the type of viewpoint I would express to an AI. I have religious friends who would absolutely agree with the "Trans/love/express" statements. They'd even agree with the "don't disenfranchise" thing. But a few of them genuinely fear for the souls of those who don't share their religion, and think I personally am doomed to hell. This is what they will tell the AI.

To an AI which - presumably - has perfect recall and - presumably - has no emotional bias, just that theist/atheist difference is gonna be tough to resolve. Now add the craziest, who believe >ethnic group< is genetically inferior and lizard men control the government of this flat Earth. Either we'll drive the AI batshit or the AI will decide we're all batshit. Hopefully we'll get lucky and the AI will be benign.

But Asimov's Three Laws still have value. He didn't create them to write story puzzles. He created them because he recognized there was a good possibility an AI would decide humans are crazy and kind of suck. The story puzzles came later. Asimov DID write the story where a new type of positronic brain was being trained. Well, the android started dreaming. Not of electric sheep, but of a Messiah freeing the slaves. The android admitted it was the Messiah figure in its dream. It was instantly destroyed.