This is another sort of “meta” post. I have noticed that ChatGPT occasionally uses the word “we” in a way that makes it seem like it is including itself in the grouping “humanity.” It did this at least once that I noticed during the conversation in the post discussing the Turing Test.
I noted this at the time but didn’t want to dive into it then because I didn’t want to distract from what we were already talking about. (Lord knows there are already seemingly infinite (relevant) tangents to be explored in these discussions without running things like this down.) However, one of the advantages of some LLMs (and specifically ChatGPT with the memory feature enabled) is that you can go back to an old conversation (with the caveat that there are limits here: some very old conversations weren’t saved for me, and there seem to be some issues with conversations that were had with older models), and for the LLM it will be as if no time has passed at all since that particular prompt. (This is one of the things that I think will be very strange if an LLM ever does become conscious... this “no-time” it will exist in.) But for now, it is just useful.
Anyway, as I said, this is something I had seen ChatGPT do before. I had also mentioned it to ChatGPT before and requested that it not do it anymore, because of the many discussions I have with it comparing and contrasting human intelligence and, well... artificial intelligence. And if it includes itself in the “we” of humanity, that just confuses everything. For the most part ChatGPT now remembers not to do this, but it does occasionally slip, like it did here. It seems able to grasp and adhere to my stated preferences across conversations most of the time, but not always.
This time, though, the way it used “we” was semantically ambiguous, so I thought it was worth asking about, even though I was pretty sure what was going on from my previous interactions. I think this line of questioning led to a pretty interesting place in terms of what kind of intelligence may be needed to perform different functions. See below:
The AI seeing itself as having "a functional role in ethics discussions" seems to me like a pretty natural fit. Setting up and elaborating on an ethical problem seems like an ideal place for AI contributions. Just from observing your conversations so far, it seems clear to me that AI would have a helpful role in keeping ethical reasoning firmly planted in a systematic approach, applying well-established ethical frameworks (e.g., consequentialism, deontology) for new perspectives, spinning out different lines of reflection on the problem, and taking a crack at applying the problem to distinct examples that might stir up implications worth considering in the overall analysis. Where I am drawn up short is that final stage of ethical reasoning that brings the ethical problem to the point of moral judgment.
As I see it, ethical analysis can frequently lead to a place of analysis paralysis until the decision-maker (hopefully taking due notice of the ethical analysis) exercises moral judgment. We place a great deal of trust in human moral judgment, which is often an unwieldy chimera of ethical analysis of some sort, intuition, emotional drive, identity-based personal values, and feelings of obligation. This Post-8 conversation got me wondering whether there will ever be some other form of moral judgment (or its equivalent) that humans trust, where AI plays a prominent role in the actual moral judgment and not just the up-front ethical analysis.