Post 29: A (self-)conscious robot: what would it be like and how would we get there? Part A sub 1:
Slow your roll. . .
As promised way back in Post 11, I am finally getting back to this topic, though not quite in the way I expected. The plan back in Post 11 was to figure out how a conscious AI might come about from where we are now, then follow that up with what that AI might be like, and finally whether it would be a good idea.
After spending some time trying to figure out how LLMs actually work, (see Post N), I am forced to admit my approach was a little. . . optimistic? Naive? Foolhardy? Hey, I can admit when I am wrong and change course though, (I have a ton of practice lol, being wrong and changing course is practically second nature to me 🙂).
So. . . I am going to take what I learned and try to apply it. Suppositions:
1. Once trained, LLMs are mostly static, i.e. they are doing rather than doing AND learning simultaneously. So unless some really bizarre emergent phenomenon happens, (can't ever rule out anything completely), it seems very unlikely that a model like the one I use the most right now, ChatGPT 4o, is conscious or will spontaneously become conscious, (even if we haven't really been able to define that yet. . . see the "What the hell is consciousness anyway?" series of posts that are ongoing). The toy sketch after this list illustrates the static vs. learning distinction.
2. Flowing from 1 above, some kind of dynamic learning process needs to be happening for an AI to take steps towards what we call consciousness, (but can't quite define. . . yet? ever?).
3. This kind of dynamic learning is already happening, and the work being done on it is mostly aimed at Artificial General Intelligence, (AGI).
4. If consciousness is to happen, a likely intermediary step is some form of AGI.
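To make suppositions 1 and 2 a little more concrete, here is a minimal toy sketch, entirely my own illustration (no real LLM or framework involved), contrasting a "static" deployed model whose weights are frozen with a hypothetical continual learner that updates itself from every interaction. The TinyModel class and the y = 3x "world" are made-up stand-ins for a trained network and its environment.

```python
class TinyModel:
    """A one-parameter 'model' that predicts y = w * x."""
    def __init__(self, w: float):
        self.w = w

    def predict(self, x: float) -> float:
        return self.w * x

    def update(self, x: float, y_true: float, lr: float = 0.05) -> None:
        # One gradient-descent step on squared error (factor of 2 folded into lr).
        error = self.predict(x) - y_true
        self.w -= lr * error * x


def static_deployment(model, stream):
    """How today's deployed LLMs behave: weights frozen, inference only."""
    return [model.predict(x) for x, _ in stream]


def continual_deployment(model, stream):
    """Hypothetical continual learner: predict, then learn from each example."""
    predictions = []
    for x, y_true in stream:
        predictions.append(model.predict(x))
        model.update(x, y_true)
    return predictions


if __name__ == "__main__":
    world = [(x, 3.0 * x) for x in range(1, 6)]  # the "environment" follows y = 3x
    frozen = TinyModel(w=1.0)
    learner = TinyModel(w=1.0)
    print("static:   ", static_deployment(frozen, world))      # stays wrong forever
    print("continual:", continual_deployment(learner, world))  # drifts toward y = 3x
```

In the static case the model's errors never shrink no matter how much of the world it sees; in the continual case the single weight drifts toward the true value as it keeps interacting. Real continual-learning research is, of course, vastly more complicated than this, but the basic distinction is the same.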
So, the discussion below with ChatGPT starts with efforts being made today regarding continuous learning in AI and how that relates to AGI.
Additionally, I came up with a potentially better way to do continuous learning that the LLM didn't consider, and then of course found out that the real world of AI research is way ahead of me. (This is fully to be expected: I didn't go to MIT, I am not a genius, and I am not a computer scientist. . . so I am pretty forgiving of myself for playing catch-up, which I will probably be doing indefinitely.)
Spoiler: A lot of the speculation in the discussion below moves towards giving models the ability to interact with the world the way we do, with real-time sensory input. I have no real idea whether this is a viable path, but it does most closely match how the creatures we are most convinced are conscious actually develop. . .
Note: The discussion below looks familiar because it picks up at the same place as the discussion posted in "Post 27: First addendum to 'Post N: How an LLM is trained'"; however, it carries the topic a lot further.

