Discussion about this post

Andrew Cumings:

It looks like there is a term for this concern: model autophagy disorder, a phenomenon in which an AI system consumes its own outputs to excess and its performance and reliability degrade as a result. That said, I'm not sure those weaknesses would necessarily be a hurdle to the development of a kind of consciousness.
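
The degradation is easy to see in a toy version of the loop. The sketch below is my own illustration, not anything from the post or a model of any specific system: it repeatedly refits a one-dimensional Gaussian to samples drawn from the previous generation's fit, and the compounding of finite-sample noise is what stands in for the "autophagy."

```python
# Toy sketch of model autophagy: each generation is "trained" (fit) only on
# samples generated by the previous generation's model. Estimation noise
# compounds across generations, so the learned distribution tends to drift
# and to lose the tails of the original data.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)
mu, sigma = data.mean(), data.std()

for generation in range(1, 201):
    # The next model sees only the previous model's outputs, never real data.
    synthetic = rng.normal(loc=mu, scale=sigma, size=50)
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 40 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

# Over many generations the fitted spread typically drifts well below 1.0,
# i.e. the chain "forgets" the variability of the data it started from.
```

Nothing about the toy depends on the Gaussian choice; any model refit on its own finite samples accumulates this kind of drift, which is the basic shape of the concern.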

Andrew Cumings:

Briefly considering the discussion of those three routes to consciousness as distinct operations, SRFL seems to me the most fraught avenue for developing consciousness. When AI systems learn from their own generated content, they are likely to amplify errors and biases present in the earlier datasets. What sort of thinking about thinking could emerge from such a self-referential cycle of data? I would think the resulting degradation in output quality would undermine thought operations that require precise understanding and contextual awareness, like distinguishing objects or generating coherent, contextually relevant "thoughts." My idea doesn't feel fully formed yet, but I think I'm getting close to what bugs me about consciousness through self-referential learning.
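
To make the error-amplification point concrete, here is another toy of my own (purely illustrative, assuming a categorical "model" refit each generation on its own samples, not how any particular system works). Small imbalances in the original data don't wash out under self-referential training; they compound, and rare categories can vanish for good.

```python
# Toy sketch of bias amplification in a self-referential data loop: each
# generation estimates category frequencies from the previous generation's
# samples and generates its next "dataset" from that estimate. Once a rare
# category happens to draw zero counts, its estimated frequency is zero and
# it can never reappear, so the distribution concentrates on what was
# already common.
import numpy as np

rng = np.random.default_rng(1)

n_categories = 10
n_samples = 100        # size of each generation's dataset
n_generations = 100

# "Real" data: a Zipf-like distribution over categories.
true_probs = 1.0 / np.arange(1, n_categories + 1)
true_probs /= true_probs.sum()

probs = true_probs.copy()
for gen in range(1, n_generations + 1):
    counts = rng.multinomial(n_samples, probs)  # sample a dataset from the model
    probs = counts / n_samples                  # refit the model on its own samples
    if gen % 20 == 0:
        surviving = int((probs > 0).sum())
        print(f"gen {gen:3d}: surviving categories={surviving:2d}, "
              f"top-category share={probs.max():.2f}")

# Typically several of the rarest categories disappear within a few dozen
# generations while the most common ones grow their share -- the original
# skew is amplified rather than corrected.
```

That loss of the rare and the unusual is, I think, exactly the kind of narrowing that would make precise discrimination and contextually relevant "thoughts" harder, not easier, to sustain.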

