Post 11 - A (Self) Conscious Robot: what would it be like and how would we get there? Part A:
(If we in theory wanted to, or if it just happened, or what just happened? . . .)
This post builds directly on Post 10, where we defined intelligence, sentience, and consciousness for our purposes (autonomy doesn’t really enter the picture at this point, but it likely will in Part B later on).
Here, ChatGPT and I start exploring how a conscious AI might come about from where we are technologically right now. This post does not cover what that conscious AI might be like (that will probably be covered in Part B) or whether that would even be desirable (maybe Part C? We will have to see where Part B goes first. . .).
I’m not going to lie, this post is pretty long and has some tangents, but I do think it starts to drive towards how current efforts to create Artificial General Intelligence (or AGI, which is also discussed in the post) could lead to conscious AI, which I think is pretty interesting. . . Plus, as a bonus, it will likely lead to more discussions later on about what such an AI (whether conscious or not) might be capable of.
Lol, does anyone watch Futurama? I hope this isn’t becoming too much like the informational video about Bigfoot in the Human Horn episode:
Narrator: “Bigfoot, Endangered Mystery! In the dense forests of the Pacific Northwest dwells the strange and beautiful creature known as Bigfoot, perhaps.
Sadly, logging and human settlement today threaten what might possibly be his habitat. Although if it's not, they don't. Bigfoot populations require vast amounts of land to remain elusive in. They typically dwell just behind rocks but are also sometimes playful, bounding into thick fogs and out-of-focus areas.
Remember, it's up to us. Bigfoot is a crucial part of the ecosystem, if he exists. So let's all help keep Bigfoot possibly alive for future generations to enjoy unless he doesn't exist. The end.”
-Futurama, Season 4, Episode 17: Spanish Fry
It looks like there is a term for this concern: model autophagy disorder, a phenomenon where an AI system is repeatedly trained on, and effectively consumes, its own outputs, leading to a degradation in its performance and reliability. That said, I’m not sure those weaknesses would necessarily be a hurdle to the development of a kind of consciousness.
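To make that degradation a little more concrete, here is a minimal, hypothetical sketch of the autophagy loop in Python. The "model" is nothing like a real AI system; it is just a Gaussian summarized by a mean and a standard deviation, refit generation after generation only on samples drawn from its own previous fit. All of the numbers are illustrative assumptions.

```python
# A toy sketch of the "model autophagy" loop described above, assuming a
# Gaussian stand-in for a generative model. Each generation is fit only to
# samples drawn from the previous generation's fit; none of the values here
# come from any real AI system.
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: the only time the model ever sees "real" data.
real_data = rng.normal(loc=0.0, scale=1.0, size=200)
mu, sigma = real_data.mean(), real_data.std()
print(f"generation  0: mean={mu:+.3f}, std={sigma:.3f}")

for generation in range(1, 21):
    # Every later generation trains only on synthetic samples from the last one.
    synthetic = rng.normal(loc=mu, scale=sigma, size=200)
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# With a finite sample at every step, the fitted spread tends to shrink and
# the center tends to wander over many generations: a toy version of the
# degradation that "model autophagy disorder" names.
```

A Gaussian is obviously not a language model, but the shape of the loop, a system whose only new "experience" is its own earlier output, is the part that seems relevant to the discussion below.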
Briefly considering the discussion about those three routes to consciousness as distinct operations, self-referential learning (SRFL) seems to me the most fraught avenue toward consciousness. When AI systems learn from their own generated content, they are likely to amplify errors and biases already present in their earlier training data. What sort of thinking about thinking might emerge from this self-referential cycle of data? I would think that the degradation in the quality of AI outputs could undermine thought operations that require precise understanding and contextual awareness, like distinguishing objects or generating coherent and contextually relevant “thoughts.” My idea doesn’t feel fully formed quite yet, but I think I’m getting close to what is bugging me about consciousness through self-referential learning.