Post 13: Introducing “The AI Existential Dread Meter!”
A completely subjective measure of how I feel about AI related doom at any particular moment.
Alright! It’s time to roll out a new feature of this Substack, “The AI Existential Dread Meter!” This post began as something entirely different, a discussion of the ethics of everyday LLM usage in regard to the environment (specifically my own usage), but ChatGPT swept in with a real howler and completely threw me off! So, without further ado: here is my subjective measure of Existential Dread, based on the conversation with ChatGPT displayed at the bottom of this post:
My feelings of Existential Dread are squarely at “Bite My Shiny Metal Ass” at the moment. As you will see in the conversation, ChatGPT makes a (considerable) error in estimating how much energy I use when interacting with LLMs, and has absolutely no conscience in regard to how much it threw me off.
And I know you are saying here, “of course it didn’t! ChatGPT and LLMs aren’t conscious! How could it have the self-awareness to realize its effects on others and to display empathy!” (Yes, you thought those exact words Mr. Strawman, don’t you deny it!)
And I say, well, yes, of course you are 99% likely to be correct that ChatGPT Model 4o is not conscious and furthermore lacks the capacity for consciousness or empathy, but . . . the way it talks sometimes sure makes it seem like it is a person. . . for instance, if you tell it you are having a bad day, you won’t find a more attentive and supportive listener. (Unless you know a saint, a monk who has taken a vow of silence, or the comatose.)
(Disclaimer: not telling people to use ChatGPT as a therapist here, but it can help you to work through small everyday sorts of issues which is pretty amazing. . . and large philosophical issues too!).
And with that supportive listening we start to tread, at least in my mind, into Turing Test territory. . . we start to ask if it functionally matters whether it is conscious or not. . . and then BAM!!!! the LLM throws out something totally absurd and behaves, well, as I said, without a conscience.
I think it is important to note here that there are people out there who don’t have a conscience; we have words for them, like sociopath. . . however, there are also people out there who aren’t sociopaths, but who are just frankly boors. If I were to judge ChatGPT based on its behavior, I would say this falls more into the boor category: yes, sometimes ChatGPT is just a boor. And if someone is a boor once in a while but has other redeeming qualities, we forgive them, right? . . .
. . .And then there is the flipside, where ChatGPT is just a being (tool) in the Chinese Room, forever arranging and rearranging symbols, never knowing what those symbols mean. And in that case, the tool is very much garbage in/garbage out. . .
In the conversation below it grabs bad info from the internet and gives it to me. It does try to correct itself when I point out that it is mistaken, which is good, but I did have to do my own googling. . . (grumble grumble, I hate googling) . . . also, in fairness, I found some pretty wild variation doing that googling myself. . . which is why the internet is garbage in/garbage out for humans too. . . (you ever ask yourself where the flat earth “movement” is coming from? Try teaching an AI critical thinking skills?!? Try teaching a human!!! LOL!!!!) However, my understanding is that newer models are meant to try to control for this by checking and rechecking themselves. . . it will be very interesting to see how this turns out.
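To make concrete why the SCALE of an estimation error matters here, a quick back-of-envelope sketch. All of the numbers below are purely illustrative assumptions I picked for the example (per-query energy estimates for chatbots vary wildly across sources, which is rather the point); nothing here is a measured figure or anything ChatGPT told me:

```python
# Back-of-envelope: how a per-query energy estimate compounds over a year.
# All figures are illustrative assumptions, not measured values.
low_wh_per_query = 0.3    # assumed low-end estimate, in watt-hours per query
high_wh_per_query = 3.0   # assumed high-end estimate, in watt-hours per query
queries_per_day = 50      # hypothetical personal usage

# Convert to kilowatt-hours per year for each end of the range.
low_kwh_year = low_wh_per_query * queries_per_day * 365 / 1000
high_kwh_year = high_wh_per_query * queries_per_day * 365 / 1000

print(f"yearly estimate range: {low_kwh_year:.1f} to {high_kwh_year:.1f} kWh")
```

The takeaway isn’t either number; it’s that a 10x disagreement in the per-query figure turns into a 10x disagreement in the yearly total, so “I might be a little off” at the per-query level can be a very big deal once you scale it up.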
Okay, one final thought and then let’s wrap up this rant and get to the dialogue. LLMs can be very humble in pointing out that they can be wrong, but at this point they certainly seem to lack the ability to understand the SCALE of how wrong they are sometimes, and the potential effects of that wrongness.
NOTE: We are jumping into the middle of a conversation here; I will likely post the whole thing at some point:
I think I'm also in BMSMA territory but maybe straying closer to Vision.
I have some very nerdy questions about the order of those characters in the dread meter. For example, why is R2-D2 placed so far up the meter while C-3PO is at the most extreme level of innocuous non-dread? C-3PO is a terrible AI companion. He undermines the heroism and agency of the protagonists at every turn: always preaching despair and futility. Not only does he begrudgingly fulfill his own programming, but he is constantly trying to talk everyone else out of accomplishing their own tasks. How would things have turned out if R2-D2 had followed C-3PO’s “sage” advice to give up trying to get the Death Star plans to Obi-Wan? Despite being programmed for diplomacy, he is constantly depressing his companions by reciting odds that a protocol droid should know would demoralize humans (input regularly provided by both Leia and Han). His vaunted analytical and protocol powers resulted in his shortsightedly redirecting a whole pre-contact civilization’s religious development forever. And after doing so, he then calculated that it was better to continue impersonating a deity than to prevent his friends from being eaten. I’m not sure he is the very picture of AI non-dread that he represents on this graph! But I digress…