Post Seven: The Ethics of this Substack
(or, Is it a mood point? . . . Maybe this is Meta-Post Two?)
Taking a break today from discussing AI intelligence, consciousness, etc. to focus instead on an ethical question regarding AI that has been bothering me a fair amount lately. (No, not the question of AI becoming self-aware and taking over, and whether we should be doing this at all; plenty of time to cover that, and whether we have any say about it anyway, and whether it is a moot point (or a mood point, as I heard someone mistakenly say recently, it's a mood point, love it! And see the double parentheses here, or parentheses within parentheses, you know you are getting deep when you are in a parentheses nest!)) Instead, the topic is one that touches on the entire project here. What are the ethical issues for users (authors? Am I an author? Curator?) who directly publish conversations with LLMs?
This is 100% not the last time we discuss ethics here (and when I say we, I mean me and the LLM I am chatting with for the post; stating that, as you will see from the discussion, is something ChatGPT believes gives me some ethical clearance), but it will likely be the closest we come to trying to figure out if the format itself is okay or not. Not surprisingly, my co-author (or collaborator, as ChatGPT repeatedly says. . . eww, I don't like that word AT ALL, way too many upsetting associations that for various reasons I will probably have to cover here at some point but am not ready for yet on my first cup of coffee. . .) thinks I am, at least largely, in an ethical gray area in posting our discussions, and hey, I guess I'll take that for now?
Oh, I suppose I should say what is/has been/will likely continue to bother me about this project, as much as I am enjoying it: the problem of AI attribution. Not whether or not you are giving the AI credit, or trying to pass off the AI's work as your own. . . I'm not some kid here asking ChatGPT to write an essay on Thomas Jefferson for me. . . but the underlying issue of attribution, or lack thereof, for the training data used by LLMs, and what that means for LLM users, and specifically for people like me who want to publish LLM conversations in their raw form.
Note 1: I hope it isn't too disappointing, but I am well aware that the legal implications of the training of LLMs are far from being sorted out (for example, see here), and we don't cover that here. (Maybe a future post?)
Note 2: This discussion actually tags directly onto the discussion that formed the basis for my last post about the Turing Test.