You can see in my response to someone else's comment how my quick ChatGPT 4 test of a few minutes ago went. Spoiler - the LLM is still incompetent, and I wouldn't trust it to tell me a synonym for "incompetent."
But it's programmed to be very apologetic about its incompetence and occasionally give disclaimers about its incompetence.
I still find it very peculiar that these companies have rushed out beta (pre-beta?) software to the mass public, complete with caveats like “answers might be wrong”. It’s such an odd thing to do, because it undermines faith in your product right from the start.
Except, of course, it hasn't. Which is utterly inexplicable.
Excellent article and some very interesting points. My favourite author is now a best friend, and is the one who pointed me to you. It's not just the stories, but the message behind them, which is why we connected so well. Other authors I follow closely I also chat with occasionally.
Those who just pump out the same stuff at a rapid rate I got bored with quickly, and I expect we will just see more of that from AI.
As an ex-programmer I don't see how AI can truly learn for itself and invent anything new and creative. I also don't think that is actually the point: independent thought is not what they want. Control is, and not having to continually pay you for anything.
Amazon and Facebook don't create anything; they make their money off what others create!
I've been underwhelmed by the AI fiction writing I've read. It's bland, has no authorial point of view or voice, and has no soul or narrative drive. AI might be OK for bland technical writing, assuming it doesn't make stuff up, but for creative work it's not ready yet. I have no personal interest in using AI for my writing. I'd rather own it entirely than make bland copies of it influenced by who knows what.
It can work fine for basic analytical writing, or summarising. In both cases it’s taking a bunch of human-made resources and simply presenting them in a new, useful way. I don’t really want that sort of thing to be ‘creative’.
The lack of authorial point of view is a bit of a blocker for creative writing. That’s where the music video example was interesting, as they were using an artist’s style as the ‘point of view’, then extrapolating from there. Perhaps that concept works better with visuals than text?
You talk about the importance of process, and that's where I find "linguistic emulators" (lol, there's got to be a better term than the generic "AI" to cover all of what we are seeing) like ChatGPT and Claude incredibly useful as a writer. I've played with image generators but the novelty wore off quickly, so I can only speak with any authority on chatbots/linguistic emulators. But they are amazing -- when used properly.
When used in a purely generative fashion -- prompt and then cut/paste the result -- they are indeed darn weak. The novelty wears off fast, and you quickly realize that as writers they are hacks.
But switch to a dialogue mode and they are amazing.
Having them act as a tutor, for example, can help you learn a subject far more thoroughly and quickly than just reading texts. Or, in the use case I've found especially helpful -- use them as a tutor while working through a particularly difficult text.
In my writing, I use them as an all-around editor, research assistant, and idea-bouncer. I am a solitary writer with no "support staff." So I have no humans to preview my work and suggest ways to improve it; but Claude and ChatGPT perform that role very well -- How's my overall structure? Am I missing anything important in my argument? What would an expert in X, Y or Z say?
Or when I'm working through early ideas for a more complex essay, it's incredibly useful as a brainstorming partner. And for research, of course: yes, I have to double-check it, but double-checking is a lot faster than doing the initial research (and nowadays they are rarely wrong). And while some simple questions I have can still be easily checked via Wikipedia or whatever, the most interesting ones are usually much better answered by a chatbot.
Anyway, long story short: don't dismiss the value of Claude or ChatGPT (I find Claude to be the "better" writer, fwiw) in helping one's own writing just because you don't plan on directly using its output. The best use for it is to help you refine your own ideas and your own writing.
It's during the process itself that chatbots are best used, not to produce a final object.
I've heard lots of writers say how useful LLMs are for research and idea bouncing. Doesn't really fit with my approach, but I can certainly see how it could be useful.
In a non-creative work context, I've found Gemini to be occasionally useful: specifically, in the day job I have to deal with various Google products (e.g. Google Ads) and the documentation is generally a disaster zone: contradictory, out of date, and generally nonsensical. That's an example where Gemini was able to give a straight answer to a weird, niche question about a very specific feature, saving me having to comb through a ton of support docs manually. That would have been intensely dull.
For my creative writing, though, I really rather enjoy the much slower pace of trawling through different sources to find ideas and answers to research questions. Slowness is an important part of my creative process, actually: it's in those in-between gaps that I have many of my best ideas.
The problem is, of course, whether the LLM has been correctly trained on whatever topics you're using it to bounce ideas off.
As a niche example, on a VFX forum a user talked about how they were bouncing ideas through an LLM to work on a tutorial, and they posted the outline they'd eventually used the LLM to generate.
Once I corrected all the basic errors the LLM had made in every single step of the technique the tutorial covered, the user did what they should have done in the first place: read the manual and asked the community for clarification...
I, on the other hand, between reading Simon's article and coming here to comment, ran one of my common LLM tests in the most current version of ChatGPT: generate a 2d8 table for my preferred TTRPG.
As ChatGPT has done since 2022, it spat out a 1d20 table, then a 1d12 table, then a 2d6 table.
Until the fucking LLM can correctly parse something basic like 2-goddamn-8-sided-dice, not 2d6, 1d12 or 1d20, you will be unable to convince me an LLM is useful for anything.
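(Side note, to spell out what the test asks for: a 2d8 table is just fifteen rows keyed by the sum of two eight-sided dice, 2 through 16. Here's a minimal Python sketch, with made-up placeholder entries:)

    import random

    # A 2d8 table is keyed by the sum of two eight-sided dice:
    # fifteen rows, 2 through 16 -- not 1-20 (1d20), 1-12 (1d12),
    # or 2-12 (2d6). The entries here are made-up placeholders.
    table = {total: f"Encounter {total}" for total in range(2, 17)}

    def roll_2d8() -> int:
        # Sum of two d8s; middle results (around 9) come up most often.
        return random.randint(1, 8) + random.randint(1, 8)

    roll = roll_2d8()
    print(roll, "->", table[roll])

The bell-curve weighting over 2-16 is the whole point of asking for 2d8 rather than a flat 1d20 in the first place.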
Yeah, outside of that specific example I gave, I've mostly encountered AI making errors. Or third hand: people operating off bad information. There's been a slew of AI slop articles on Substack, for example, talking about how 'tags' work on Substack. Very confidently stating absolute rubbish, and the comments are full of people praising the 'author'. Even though the information is all wrong.
And that remains the central problem: you can't rely on the information's veracity, so anything beyond very, very basic queries (or weirdly niche and specific stuff that's easily verified, like with my Google Ads example) turns into a tedious minefield.
It's not a time saver if you have to cross-check every single thing it tells you.
Sure, people get things wrong, too. But in research terms, that's why things are peer reviewed. If I go talk to a physicist at a university, I can have a high level of confidence that the information will be accurate. Plus, I'll get to meet an interesting person and perhaps discover all sorts of OTHER things I wasn't expecting along the way.
LLMs are best thought of as a completely new medium, and their best use is not to gather a few specific facts or to "produce a paper," but to engage in dialogue with (effectively) the entire corpus of human knowledge.
Yes, humans can do that too -- if you have access to an expert. But who is expert enough to help me think through the similarities and differences between Alfred North Whitehead's and Nagarjuna's metaphysics, for example, or to trace in precise detail the influence of Arabic architecture on Gothic? Such experts are incredibly rare and certainly not on hand to discuss my thoughts in real time as I work through a book I'm reading or an essay I am trying to outline.
Dialogue is a completely different form of knowledge acquisition and internalization than just reading a single long argument in a book or essay. I'm not saying it's better in any absolute sense, but it's certainly better for many things and in many ways, especially the most difficult forms of thinking (there's a reason Plato only wrote dialogues).
And it's also worse for many things and in many ways. But too many people, it seems to me, are like: "this isn't the tool I thought it would be, and therefore it's useless" or: "so many people are using this tool to do stupid things, so it must be bad". It's a very unimaginative and narrow-minded approach to a new tool/medium.
Quite.
Yes, those are both terrible things to try and use a general-purpose LLM for; it’s not that kind of tool. Especially a table like that, with the public models. The newer “reasoning” models might be better at table generation, but you’d need the Plus version of ChatGPT to check that. And an LLM trained for specific work like a software tutorial would of course be better. Foolproof, maybe not, but that doesn’t mean useless.
Your final thought is absurdly narrow-minded, btw, and I hope meant only rhetorically: as if “not useful for what I want” means not “useful for anything”. Lol. “If this screwdriver can’t hammer a nail, it’s not useful for anything!”
Interesting analysis. My belief is generative AI will ultimately make it impossible to earn a living from any kind of creative endeavour (it's hard enough already). When I first started, I used a couple of AI-generated images on my Substack. However, I have now removed them and I try to stop my work being used to train LLMs wherever possible. If all creatives refused permission, this might at least delay the day when we all become redundant.
Possibly, but I'm not so sure. I think it'll be a race to the bottom for more formulaic stuff, and anything which does use AI. I expect there to be a lingering interest in boutique/handcrafted creative endeavours, much like there are still expensive bakeries that specialise in types of bread, even though you can buy decent bread cheaply at a supermarket. People still paint, even though you can take photos. The drive to create isn't going to go away, and there will always be humans who want stuff made by other humans. In some ways, that stuff will have more value, rather than less, in a world of AI slop.
Perhaps. :)
I hope you’re right, but I fear that for those of us who write genre fiction the end is nigh, from a publishing and sales perspective.
I’ve never really made any useful money from my fiction writing, so I might not notice the change. :)
Very interesting, and as I posted a note with a quote from you, Simon, I'm not sure where I stand. In fact, I'm not sure anyone will know where they stand over time, because this isn't going to stop.
So the question I ask myself now is:
'How will I adapt and maintain myself as much as humanly (no pun intended) possible?'
Staying on your toes and being able to adapt (regardless of where you happen to be on this) is vital. I expect AI to disrupt some things far more than we expect, and other things far less than we expect. But it's not really possible to predict what's what at the moment. :)
Oh, hey, I can give you a great example in the realm of art.
I think it disrupts far LESS than I expected... and I've found a use for it as a professional artist without it EVER doing my work, while it remains valuable... to ME.
Reference pictures.
I spend hours looking for examples of things when I don't know what something looks like. An animal, a view, a type of architecture. It's time I could spend drawing for my clients...
When I ask an AI to give me an example of a combination picture (something I would otherwise need 5+ pictures to use together as a reference), I get a picture that so far is only 35-45% accurate, BUT it gives me enough to draw it myself.
I'm not looking for it to do my work. I have found, however, that it cuts down my research time in many instances. Not all, but hey, I've saved more than 10 hours on a single project.
For an artist like me, that's huge.
Interesting post.
I am an IT guy and love to see tech develop. And I can agree that the discourse between pro- and anti-AI groups has been rather dishonest: one group is making it look like the world is coming down, while the other is trying to make it sound way better and more useful than it really is.
But there is one aspect I see being overlooked nearly all the time. You mention it somewhat, very briefly: the fact that AI works best when you use it as a tool.
So much discourse is about how bad or fantastic AI is. I use it extensively myself, but I never let it do everything for me. And the clashes always go from one extreme to the other. The group in the middle is caught up in an almost all-out war they don't even want to be a part of.
I have seen well-meaning, genuine artists disappear from the web because someone just randomly thought something they made was AI-generated. I have seen posts of people being attacked in such a way that they eventually chose to take their own lives.
It's really hard to see all of this happen; while both sides have a few genuine good points, the extreme overreaction is just absurd. The issue looks a lot like one the DJ community struggled with for a long time, and still does: the "sync button". DJ gear and software can analyze the music, and by pressing a special button you can sync up two tracks and be done. Many people started to complain when more and more hardware was released with the button. People were attacked when a live stream showed them pressing it. While some artists did speak out against the backlash, it took years for things to cool down. Generally speaking, many now realize what the "sync" button can do for a DJ and how it helps them mix in even more creative ways.
I see the same happening with AI, though on a larger scale and in a more complex way.
To take Gemini as an example: I recently tried to get it to generate a logo for a city in my fantasy serial. The image would probably never be released, but Gemini refused to generate it when it saw it was for creative work. When asked about it, it replied that it can't generate photographic material or logo designs when it knows they will be used for a creative endeavor, because of the risk of copyright issues.
To be honest, that is also the most they can do.
I eventually got a logo out of it, but through a new chat, asking not for a logo but for an abstract image, with a bunch of extra variables that would define it.
Is it Google's fault that it generated the image? No. I gamed the system in a way that no online service would be able to put a stop to, short of generative image and video services shutting down everywhere.
To wrap up, because this is much too long:
I think using generative AI as a tool to help, for inspiration, or for reviewing your work, can be fantastic. AI is a tool that can rapidly recognize patterns, which, in the case of writing, means it can point out stuff you didn't even see.
In the end, the main thing that caused all of this to become so large an issue is the availability, and the way people want to reach success through shortcuts. For some it works; for most it won't. But the few it works for inspire the many who fail. This muddies whole industries with junk like AI-written books, online novels generated entirely by AI, and more.
I think that with Google's Gemini now becoming more and more of an assistant, replacing the original Assistant this month, Google has basically achieved what AI should be: a tool.
ChatGPT, Claude, and the others feel more like gimmicks that are easy to abuse, as if that's their only purpose. At least that's what it feels like to me. Granted, I've only used ChatGPT and a few minor other LLMs, but still.
I think we are getting to the stage of AI where people are starting to use it more and more as a tool to help, rather than having it write everything, code everything, etc., for them. Sure, many will still try. But it is becoming harder to get away with it, and when people are genuinely called out for it, the consequences are harsher.
Let's see where this all leads, but I think things are starting to clear up. New tech always has its starting hiccups. Something as large as AI will take a while longer before the two extremes settle down.
So, yeah, all of this.