It looks like there is a term for this concern: model autophagy disorder, a phenomenon in which an AI system is trained more and more on its own generated outputs, leading to a progressive degradation in its performance and reliability. Though I’m not sure those weaknesses would necessarily be a hurdle to the development of a kind of consciousness.
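For a concrete picture of that degradation, here is a minimal toy sketch (my own illustration, with made-up numbers, not taken from any particular paper): a "model" that is nothing more than the empirical token frequencies of its training data, retrained each generation purely on samples drawn from itself. Rare tokens that fail to appear in one generation's sample can never come back, so the model's coverage of the original distribution only shrinks.

```python
import numpy as np

rng = np.random.default_rng(42)

vocab_size = 1000      # distinct "tokens" in the original data
sample_size = 5000     # size of each generation's training set

# Generation 0: real data with a long-tailed (Zipf-like) token distribution.
true_probs = 1.0 / np.arange(1, vocab_size + 1)
true_probs /= true_probs.sum()
data = rng.choice(vocab_size, size=sample_size, p=true_probs)

for generation in range(1, 21):
    # "Train" the model: its distribution is just the empirical token frequencies.
    counts = np.bincount(data, minlength=vocab_size)
    model_probs = counts / counts.sum()
    # The next generation trains only on data sampled from the model itself.
    data = rng.choice(vocab_size, size=sample_size, p=model_probs)
    if generation % 5 == 0:
        surviving = int((np.bincount(data, minlength=vocab_size) > 0).sum())
        print(f"generation {generation:2d}: {surviving} of {vocab_size} tokens survive")
```

Freezing everything except the data is obviously a caricature of how LLMs are actually trained, but it isolates the self-consumption effect: without fresh outside data, diversity is lost and cannot be recovered.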
Wow! That is interesting! I am very curious what happens/is happening/will happen as more LLMs get more external data to train on. I have only started to explore the process by which these models are trained (look for a post on this in the near future!), but it seems like one way to avoid the phenomenon you mention is to continuously feed in new data... (Just as an aside, I have also read that some companies are complaining they are running out of data to train their models on... that seems crazy!) Sensory data seems like one way to go... (I can't say for sure at this point; I am still trying to grasp how a purely text-based LLM is trained, and I haven't even started trying to understand how data like sight, sound, and touch are integrated into these systems, other than to know it is happening.) Anyway, that is a long-winded way of saying there seems to be an unlimited supply of real-world data to train on to avoid this. (Again, I'll have more to say when I understand more.)
Briefly considering the discussion about those three routes to consciousness as distinct operations, SRFL seems to me the most fraught avenue for developing consciousness. When AI systems learn from their own generated content, they are likely to amplify errors and biases present in earlier training data. What sort of thinking about thinking might emerge from this self-referential cycle of data? I would think that the degradation in the quality of AI outputs could undermine thought operations that require precise understanding and contextual awareness, like distinguishing objects or generating coherent and contextually relevant “thoughts.” My idea doesn’t feel fully formed quite yet, but I think I’m getting close to what is bugging me about consciousness through self-referential learning.
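To make the amplification worry concrete, here is a second toy sketch (again my own construction, under assumed toy distributions): a classifier that knows the shapes of two overlapping classes but re-estimates their balance each round from its own hard-labeled output. Points it is unsure about get rounded toward the majority class, so a true 60/40 split is quickly "remembered" as nearly 100/0.

```python
import numpy as np

def normal_pdf(x, mean):
    # Unit-variance normal density, which is enough for this toy example.
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(0)

# Unlabeled "real" data: 60% class A ~ N(0, 1), 40% class B ~ N(1, 1),
# deliberately overlapping so many points are ambiguous.
n = 5000
is_a = rng.random(n) < 0.6
x = np.where(is_a, rng.normal(0.0, 1.0, n), rng.normal(1.0, 1.0, n))

prior_a = 0.6  # the model starts with the correct class balance
for round_number in range(1, 11):
    # The model hard-labels every point using its current belief about the balance.
    posterior_a = prior_a * normal_pdf(x, 0.0)
    posterior_b = (1.0 - prior_a) * normal_pdf(x, 1.0)
    labeled_a = posterior_a > posterior_b
    # "Retraining" on its own labels: the class balance is re-estimated from them.
    prior_a = labeled_a.mean()
    print(f"round {round_number:2d}: model's estimate of class A share = {prior_a:.1%}")
```

Nothing in the loop corrects the drift, because the only feedback the model receives is its own earlier decisions; that, in miniature, is what seems fraught about building reflection on top of self-referential learning alone.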
No, I think you are right here. Just "thinking about thinking" alone doesn't seem right. But in combination with new data coming in? Based on what I am learning about how LLMs are actually trained, I already think this particular post is going to have to be revisited with some of that in mind.