Discussion about this post

Andrew Cumings

Great post! A few thoughts:

1. I found the lens of synthesis vs. original thought helpful for considering how human creativity factors into these discussions you’re having with AI. It got me thinking about other forms of synthesis, and how we treat each case a little differently when it comes to attribution:

- Encyclopedia entries have credited authors but minimal citation.

- Collected works of poetry have editors who receive some creative recognition for how the pieces were selected and arranged.

- An artist can arrange found objects in a gallery space so that the arrangement itself is the art, rather than the items displayed.

- The “author” of an oral history has some intriguing parallels to what you’re doing here.

I feel like much of how we demand attribution, and how we think about contributors’ creative acknowledgment, has to do with the intentions of the author.

2. I don't mind saying that the AI's categorization of "Human-Centric Ethical Norms" sent a bit of a chill down my spine! Obviously there is nothing sinister in pointing out that most ethical frameworks are human-centric; the observation only really matters in consequentialist discussions that cross into the interests of non-humans, such as animal or environmental ethics. I think what caused my frisson response is the realization that in animal-centric ethics we can fairly easily formulate the perspective of the non-human entity and assign desirable outcomes based on its perceived interests: survivability, comfort, dignity, and so on. Environment-centric ethics has similar perceived interests to build out from. The implication I took from the AI's categorization is that the issue of AI synthesis and attribution could likewise be approached from a non-human-centric ethical perspective. Robot-centric ethics, or AI-centric ethics? What would the desirable outcomes be? Is there really any way to talk about AI synthesis and attribution ethics that isn’t human-centric?

3. I found it very interesting that, in the discussion of AI utility, the AI stated its purpose as “to generate useful insights efficiently.” In a discussion about purpose, it really matters what kind of insights a system like this sees as its purpose. For a system trained and refined by reinforcement learning from human feedback (RLHF), the reward model is trained on human preference comparisons, so the system is learning to satisfy human preferences (a rough sketch of that step follows below). Regardless of what its purpose might have been in the minds of the initial designers, doesn’t purpose or utility quickly start to hinge on what humans prefer when they interact with an RLHF LLM? What does an average human want when she goes on the internet to seek answers from an LLM: to have her pre-existing views reinforced, to feel like there is more certainty in this world than there is, or to have her views challenged by fact-based insights? [Philosopher of technology Carissa Véliz has a lot to say about the implications of RLHF AI.]
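
To make the mechanism concrete, here is a minimal sketch of the reward-model step in RLHF, assuming a hypothetical `reward_model` that scores a whole response with a single number; the names and shapes are illustrative, not any particular lab’s actual pipeline:

```python
# Minimal sketch of the RLHF reward-model objective (a Bradley-Terry
# style pairwise loss). reward_model, chosen_ids, and rejected_ids are
# illustrative placeholders, not any real library's API.
import torch
import torch.nn.functional as F

def preference_loss(reward_model, chosen_ids, rejected_ids):
    """Push the reward of the human-preferred response above the
    reward of the rejected one."""
    r_chosen = reward_model(chosen_ids)      # scalar score per sequence
    r_rejected = reward_model(rejected_ids)  # scalar score per sequence
    # -log sigmoid(r_chosen - r_rejected) is minimized when the model
    # ranks whatever the human raters preferred higher; the objective
    # is literally "match human preference", whatever that preference is.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The policy model is then tuned to maximize this learned reward, which is why the deployed system’s effective purpose drifts toward whatever the raters happened to prefer.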
