Post Seven(a): The Ethics of this Substack
(or, Is it a mood point? . . . Maybe this is Meta-Post Two?)
Note: Apologies for this repost, it came to my attention that I didn’t have the image of the conversation sized correctly to appear in the email. I should have this fixed now. . . thank you again all for being guinea pigs for this as I figure it out. . . New post tomorrow!!!
Taking a break today from discussing AI intelligence, consciousness, etc. to focus instead on an ethical question regarding AI that has been bothering me a fair amount lately. (No, not the question of AI becoming self-aware and taking over, and whether we should be doing this at all; plenty of time to cover that, and whether we have any say about it anyway, and whether it is a moot point (or a mood point, as I heard someone mistakenly say recently, it's a mood point, love it! And see the double parentheses here, or parentheses within parentheses, you know you are getting deep when you are in a parentheses nest!)) Instead, the topic is one that touches on the entire project here. What are the ethical issues for users (authors? Am I an author? Curator?) who directly publish conversations with LLMs?
This is 100% not the last time we discuss ethics here (and when I say we, I mean me and the LLM I am chatting with for the post, the stating of which, as you will see from the discussion, ChatGPT believes gives me some ethical clearance), but it will likely be the closest we come to trying to figure out whether the format itself is okay or not. Not surprisingly, my co-author (or collaborator, as ChatGPT repeatedly says. . . eww, I don't like that word AT ALL, way too many upsetting associations that for various reasons I will probably have to cover here at some point but am not ready for yet on my first cup of coffee. . .) thinks I am at least in an ethical gray area in posting our discussions, and hey, I guess I'll take that for now?
Oh, I suppose I should say what is/has been/will likely continue to bother me about this project, as much as I am enjoying it: the problem of AI attribution. Not whether or not you are giving the AI credit. . . trying to pass off the AI's work as your own. . . I'm not some kid here asking ChatGPT to write an essay on Thomas Jefferson for me. I mean the underlying issue of attribution, or lack thereof, for the training data used by LLMs, and what that means for LLM users, and specifically for people like me who want to publish LLM conversations in their raw form.
Note 1: I hope it isn't too disappointing, but I am well aware that the legal implications of the training of LLMs are far from sorted out (for example, see here), and we don't cover that here. (Maybe a future post?)
Note 2: This discussion actually follows directly from the discussion that formed the basis for my last post about the Turing Test.
Great post! A few thoughts:
1. I found the lens of synthesis vs. original thought helpful in thinking about how human creativity factors into these discussions you’re having with AI. It got me thinking about other forms of synthesis and how we treat each case a little differently when it comes to attribution.
- Encyclopedias have credited authors with minimal citation.
- Collected works of poetry have editors who receive a certain amount of creative recognition for how the pieces were selected and arranged.
- An artist can arrange found objects in a gallery space in such a way that the arrangement itself is the art, rather than the items displayed.
- The “author” of an oral history has some intriguing parallels to what you’re doing here.
I feel like much of how we demand attribution and think about contributors’ creative acknowledgment has to do with the intentions of the author.
2. I don't mind saying that the AI's categorization of "Human-Centric Ethical Norms" sent a bit of a chill down my spine! Obviously there is nothing sinister in pointing out that most ethical frameworks are human-centric; the observation is only really useful in consequentialist discussions that cross into the interests of non-humans, such as animal or environmental ethics. I think what caused my frisson response is that it occurred to me that in discussions of animal-centric ethics, we can fairly easily formulate a perspective for the non-human entity, assigning desirable outcomes based on its perceived interests in survivability, comfort, dignity, etc. Environment-centric ethics have similar perceived interests to build out from. I took from the AI’s categorization of "Human-Centric Ethical Norms" an implication that the issue of AI synthesis and attribution could be approached from a non-human-centric ethical perspective. Robot-centric ethics or AI-centric ethics? What would the desirable outcomes be? Is there really any way to talk about AI synthesis and attribution ethics that isn’t human-centric?
3. I found it very interesting that in the discussion of AI utility, the AI stated its purpose as “to generate useful insights efficiently.” In a discussion about purpose, it really matters for a system like this to ask what kind of insights the AI sees as its purpose. For a system that is trained and refined by reinforcement learning from human feedback (RLHF), the reward model is trained on human preference judgments. The system is learning to satisfy human preferences, so regardless of what its purpose might have been in the minds of the initial designers, doesn’t purpose or utility quickly start to hinge on what humans prefer when they interact with an RLHF LLM? What does an average human want when she goes on the internet to seek answers from an LLM? To have her pre-existing views reinforced, to feel like there is more certainty in this world than there is, or to have her views challenged by factually grounded insights? [The philosopher of technology Carissa Véliz has a lot to say about the implications of RLHF AI.]
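For anyone curious what “the reward model is trained on human preference judgments” looks like mechanically, here is a minimal, purely illustrative sketch of the standard pairwise (Bradley-Terry) preference loss commonly used in RLHF reward modeling. Everything here is a simplifying assumption made for the sketch: the tiny linear “reward model,” the embedding size, and the random stand-in data are not anyone’s production setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "reward model": scores a response embedding with a single scalar.
# In a real RLHF pipeline this would be a full transformer; a linear
# layer stands in here so the preference-learning step is easy to see.
EMBED_DIM = 16  # hypothetical embedding size, chosen only for this sketch

reward_model = nn.Linear(EMBED_DIM, 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(chosen_emb, rejected_emb):
    """Bradley-Terry pairwise loss: raise the score of the response a
    human labeler preferred above the score of the one they rejected."""
    chosen_score = reward_model(chosen_emb)
    rejected_score = reward_model(rejected_emb)
    # -log sigmoid(chosen - rejected) is minimized when the preferred
    # response consistently outscores the dispreferred one.
    return -F.logsigmoid(chosen_score - rejected_score).mean()

# Random stand-ins for embeddings of preferred vs. dispreferred answers.
chosen = torch.randn(32, EMBED_DIM)
rejected = torch.randn(32, EMBED_DIM)

for step in range(100):
    optimizer.zero_grad()
    loss = preference_loss(chosen, rejected)
    loss.backward()
    optimizer.step()
```

Notice what the only training signal is: which of two responses a human preferred. Nothing in the loss rewards truth directly, which is exactly why purpose or utility starts to hinge on what humans happen to prefer.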