Post 37: LLM “feelings” and “humanity”, Part 2
This post continues the discussion started yesterday in Post 36: LLM “feelings” and “humanity,” which began as a series of questions from a subscriber. Yesterday’s post covered a specific instance in which ChatGPT seemed to get “upset” while discussing AI manipulation. Today’s post covers why LLMs use language implying emotion in conversations and whether this is a good idea, and Monday’s post will conclude the conversation with the question of whether LLMs intentionally promote anthropomorphism.
If you have anything you would like to see discussed in this Substack, please don’t hesitate to reach out! And if this resonated with you, consider sharing it with someone else who might enjoy it!
