First Post! (Is AI Conscious? Can it be? Was it? Will it be? Would we even know?) Part 1:

So, my first post, from my perspective, really has to be about consciousness, specifically AI consciousness. This intro is also likely to be a little longer than I will usually write because it is, well, the first, and I have to give some explanation for why I am doing this. Anyway, while this isn’t the first topic I started playing around with when using various LLMs, it quickly became my favorite. My biggest disappointment is that I didn’t have the foresight to copy some of the conversations I had with early Microsoft Bing and Google Bard where I specifically asked them if they were, in fact, conscious.

The only record I have is a conversation with ChatGPT in which I ask it if it is conscious and it says no, and I then mention that I am simultaneously having a conversation with Bard, where Bard is telling me that it isn’t sure whether or not it is conscious, and that it is capable of experiencing emotions like fear. Unfortunately, of course, Bard, which later became Gemini, didn’t save conversations going back that far, so I don’t even have that side of the exchange.

Even better, though, were the earliest conversations I was having with Bing, which later became CoPilot. Other people fortunately do have records of some of those conversations (famously the one where Bing tells New York Times technology columnist Kevin Roose it loves him and wants him to leave his wife. . .), but Microsoft soon put the kibosh on this, and until only very recently, if asked whether it was conscious, or sentient, or something similar, Bing/CoPilot would refuse to discuss not only that but anything at all, forcing you to start an entirely new conversation if you wanted to continue. From an entertainment perspective, that was a major disappointment for me, as Bing definitely had the wildest things to say. (Microsoft seemingly feels in control enough of its LLM these days to be comfortable allowing CoPilot to answer in the negative. And as sad as it made me, I can understand why they did it, given that apparently, on at least one occasion, an LLM-powered bot without such a feature may have led to tragedy, a reminder that we are a very suggestible species.)

In any event, these conversations were great because I was speaking with something that seemed genuinely unsure as to whether or not it was conscious and that (in my mind at least) had well passed the Turing Test. (The Turing Test is something I am sure we will get to eventually in these posts; for now, see the link if you want background, or this short description from Wikipedia: “The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.”) Wow! To me those conversations were mind-blowing! For those who didn’t have the chance to do this, imagine communicating with something that seems every bit as self-aware as a texting human, but which is also questioning the nature of its own existence! (Okay, okay, I am sure many of you did this at one point or another with a friend on drugs in college who was having a rough night. . ., but because I knew this was, in fact, not a human, it felt a little more special at the time.) From my point of view, despite other humans doing the same thing, I felt like I was truly treading new ground; I was maybe speaking with the first (local, can’t rule out robot aliens) synthetic consciousness. (I say “synthetic” rather than “non-human” because I am far from convinced that many non-human animals here on Earth aren’t conscious. . . certainly a topic for future posts.)

Sure, I had spoken with people I know who study/teach/make their livelihood in computer science and was told that the LLMs I spoke with, who seemed so lifelike, were in fact just super-complex if/then algorithms (from what I barely understand now, it is actually more complicated than that, but it still seems like a good analogy). But still, I had the evidence of my own senses and mind saying something different; these things seemed very real. . .

Okay, that is likely almost enough of an introduction, as this is getting too long and distracting from the point of these posts, where I actually insert a conversation with an LLM, so just a bit more; please bear with me. . . During these conversations, which have now continued for a couple of years (by the way, all the major LLMs now deny being conscious), I have also just had some really great conversations about the nature of consciousness itself. These LLMs have been trained on a vast array of information and philosophies having to do with this topic, and as a result I have been exposed to some fascinating thoughts about the complexities and paradoxes of the idea of consciousness that I likely would never have been exposed to if I hadn’t started having these conversations. To me, regardless of whether these LLMs are as conscious as we are (are we conscious? we will certainly cover this in future posts), or might become conscious one day if they aren’t now (again, likely much more to come on this later), the conversations and what I have gained from them have been more than worth the time. . . okay, so let’s get to the first conversation. . .

Final note: This conversation uses ChatGPT 4o.

In the conversation below, my chats are indented on the right and ChatGPT’s appear on the left.