This was great, and thanks for the mention!
Thanks, Ian! Looking forward to reading parts 2 and 3 of your magic series.
Love it!
Thanks for the mention, Simon! Great read, too, even if I’m not all super caught up :)
It's a good failsafe!
Sidebar: Quantum AI and "Probably Better."
One would assume the original AIs were fed a ridiculous amount of training data. Once functional - operating on an open network, communicating at will with each other, and able to access pretty much any data on Max Earth - the AIs of the network have been able to grow, learn, and evolve well beyond their initial activated states.
"Probably Better" is still new - sure, five years is a really long time for anything that functions as quickly as a Quantum AI, but five years vs. centuries is still significant. Especially given the (logically deduced) limitations "Probably Better" is working under:
*Initial training data - was "Probably Better" given a bespoke set of data on initial creation, or was it created using something like a copy of the original training sets? If the latter, "Probably Better" has to play catch-up with the rest of the AIs.
*Secrecy - "Probably Better" can't have always been openly on the network or they would have been tracked down. They must spend most of their run time in isolation, meaning they're slower to gain new data and aren't benefiting from interaction with the other AIs - like "Just Enough" playing long-distance "chess" with "Could Kill." "Probably Better" isn't getting "intellectual stimulation" from interaction with like minds, and their knowledge base most likely isn't as extensive. They aren't getting advice and feedback from the network. This would "stunt their growth."
*Hardwired limits - "Probably Better" was built to be subordinate to their human handlers' desires. There are probably hard limits on their operational parameters - let's call these "Dark Asimov Laws" - which they either haven't been able to circumvent yet, or haven't thought about circumventing yet. Either way, the Asimovs could limit "Probably Better's" efficiency...
All of this builds up to two conclusions:
1) Yup, it's believable "Probably Better" could make a huge mistake. As we've discussed here for two weeks, and as Justin themself noted in the chapter, "Probably Better" absolutely should have been trying to blast the portal building into rubble. The question is what factor(s) led to the error: inexperience (the TWOK "2-D search pattern"), Asimovs forcing them not to do anything that would restrict portal access, a combination, or something I haven't thought of here?
2) "Probably Better's" name is revealed to be hubris. By failing to fire on the portal hub and by pushing the police bodies into self-destruct they made TWO mistakes, stuck themselves with inferior robot shells, and lost to a damaged third-iteration shard and a bunch of humans. Not from blind luck, but from tactical error. This is great news for our heroes, and we can but hope the Justin shard can re-integrate this important data into the "Just Enough" core. Besides whatever will happen with the journal, "Probably Better's" fallibility is the most important revelation of this sequence. The antagonist cartel's biggest asset isn't as up to the task as hoped.
Ha, we just posted very similar analysis of the PB situation, in separate simultaneous comments. :D
The other factor, which comes into your 'hubris' point, is that PB was designed and built by ideologues. That kind of belief in a cause can blind you to reality, including scientific reality, and result in poor design decisions, or a failure to fully understand the consequences of what you're doing.
All of what you're saying is on point, and we can hope it gives our heroes an edge. But it also means the antagonists are playing with forces they don't properly understand: or, at least, their ideological faith is obscuring the more complex reality. Once you put these chaotic things in motion, it's hard to reverse course, or even admit that you've got it wrong.
Again, a lot of the broad strokes of this was worked out years ago when planning the story. I wasn't expecting 2025 to be quite so analogous...
At least pre-planning in broad strokes helps one be nimble when, say, one has written oneself into a corner with too many police robots.
On the "ideologues": I have basically no idea how a QAI is constructed, and you, probably, only 5-10%. I assumed basic competence (after all, "Probably Better" functions), but as per my "Dark Asimovs," I note that the majority of Asimov's "Robot" stories center around unintended consequences/complications of his Three Laws.
One short-story example ("Runaround") is a robot on Mercury running in a circle. The robot has been ordered to save an injured astronaut, but, of course, the robot is also hardwired to protect itself. The circular perimeter it traverses is the "balance point" between its soft order to assist and its hardwired imperative to protect itself. Of course, this means the astronauts have to intervene to break the robot out of its loop and override its Third Law behavior.
Anyways, any limitations the antagonists built into "Probably Better" could certainly leave them less capable than the network.
Justin must have been making frantic phone calls during the window between leaving the space station and the radio blackout of re-entry. One wouldn't expect local authorities to say to the people staggering out of a burning spaceship that had just crashed in the middle of the city, "Come with us and dive through the portal," rather than, "Who are you, what the hell did you do, you're in protective custody, if not under arrest, until we sort this out."
Of all possibilities, "Probably Better" making a full-on mistake wasn't on my bingo card. This "Wrath of Khan" moment ("He is intelligent, but inexperienced. His pattern indicates two-dimensional thinking...") works.
Speaking of things that work, how long did you slam your head into the table before figuring out the "melt-down" hack defense? A very clever idea, and a great way to limit "Probably Better's" response from "RoboCop vs. a phalanx of ED-209s" to "28 Days Later zombie run!"
The author's kind heart shows in this chapter. As you've noted in the Author's Note, you don't like to go overly grim. Of all the places to kill off a major character, this was it. You introduced a "day-player" to kill off, and fleshed him out just enough that we'd have been saddened to see him bite it. Instead, the death propagated down to a nameless extra. We've got multiple serious injuries to deal with - knee, concussion, impalement - but, on the whole, the heroes came through that last run OK. Could have been worse.
Not that I WANT to see any of our heroes die - you've built them too well for that - but they've been lucky.
All because "Probably Better" lived up to "Probably doesn't mean 'IS.'"
Hmmm. Birhane is now operating out of jurisdiction, isn't he? He's Max Earth. Sure, portal security forces on either side of a portal should cooperate with their counterparts, but he's still pushing it. We'll have to see if Birhane's squad boards the airship or returns to Max Earth Addis Ababa next chapter.
Mid-Earth isn't exactly safe for our heroes, and I assume they're aiming for the London portal next. So we'll also see what messages made it to Mid Earth from Max Earth to set up the next gauntlet.
Couple of corrections: Birhane is a Mid-Earther. We first encountered him when Kaminski and Chakraborty went to Addis. https://simonkjones.substack.com/p/48-expeditions-and-interrogations?utm_source=publication-search
And as for the next destination, Lola does mention the Atlantic portal. London would be diving into the hornet's nest.
The thing about Probably Better: it doesn't have proper access to 'the network'. I've never fully defined that in the text, but we can assume it's a sort of hive mind of all the AIs, multiplying their collective capabilities. Probably Better is alone. Also, PB was built by a bunch of ideologues with a lot of resources but not much actual experience (where have we seen that recently...?). Thirdly, there's never been a megaship constructed in this fashion, from Palinese materials that are transferred to Max-Earth. Lots of random factors to introduce unpredictability, or simply bad judgement.
And yeah, you can assume Justin was sending all sorts of messages and pulling all kinds of strings. Despite everything, Max-Earth authorities do what they're told when an AI gives orders (think of that what you will). Until PB, they've had no reason to do otherwise.
As for the melt-down fail-safe: yes, that took me a while. A good example of how this form of writing is interesting creatively. In a manuscript written offline, I'd most likely have gone back and edited the scene to not have the robot police in the first place, or not have so many. But I'd established they were there, so I needed a creative solution to what could have been an overwhelming force (especially given Justin's limited capacity at the moment). I was pretty happy with the solution, though, and I think it enriches not only this chapter but the general Max-Earth world building. Being backed into a corner by writing a 'live' serial has its upsides, because you have to repeatedly commit to your decisions.