So, I am in a strange place with my posts regarding AI consciousness right now. I am actually taking the time to wrap my head around (1) how large language models like ChatGPT are trained, and (2) how they actually do what they do.
I’ve sort of come to the place where I realize that in order to speculate (better?) on the subject of how an AI of today might become conscious… (or maybe become some perfect souped-up, Turing-Test-passing equivalent), I need to understand them better. I have started this process, and surprising no one, it is REALLY hard for a humanities person to grasp. But… I am actually making progress, and what I am discovering is absolutely fascinating! Once I get it, my intention is to write a series of posts where I explain what I learned. Trouble is, I’m not exactly sure how long this is going to take…
So in the meantime I will (likely) be engaging LLMs on topics that aren’t (at least directly) relevant to what makes them tick, and posting those conversations here.
Caveats:
Scenario One: I just can’t freaking figure out how they work well enough to make it worth your while to read my drivel, and I come back to the topic of AI consciousness with my tail between my legs, disappointed but a better man for it somehow.
Scenario Two: I come across something while engaging in the learning process that is exciting enough to me that I can’t help posting about it.
Another caveat (only indirectly related to the caveats above): I am trying to learn how LLMs are trained and how they function mostly by asking LLMs questions (sort of a reverse Socratic method?). Because of this, there is always the fear that, thanks to garbage in/garbage out, which I have covered in other posts, the results of my learning process might be flawed. I have my reasons for thinking that on this topic, at least, I am going to be standing on firmer ground.