I had a reader ask me about some of the emotional language used by ChatGPT in our discussions, why LLMs in general are allowed to use emotionally charged language, and whether LLMs intentionally promote anthropomorphism.
I could speculate on these topics, but decided instead to go straight to the proverbial horse’s mouth and see what ChatGPT “thinks” of these questions.
The conversation got pretty long, so I am going to break it into three parts, one for each topic:
A specific instance where ChatGPT seems to get “upset” while talking about AI manipulation
Why LLMs use language implying emotion in conversations, and whether this is a good idea
Whether LLMs intentionally promote anthropomorphism
The first part of this conversation addresses the instance where ChatGPT seemed to become heated when discussing AI manipulation.
Note: The “Basis for discussion” document referenced below is simply the text of Post 32: Online shopping and… mind control?
If you have anything you would like to see discussed in this Substack, please don’t hesitate to reach out! And if this resonated with you, consider sharing it with someone else who might enjoy it!
