I am going to be frank, this post is going to be a bit of a downer. Recently, as I was having a conversation about something with ChatGPT, (probably something about theoretical physics, a subject I have no business exploring really, but which the LLMs can really dig into. . . I question my business here as I have no idea if I am getting good info or not, as I lack the basic knowledge of the subject to make that determination. . . this will be a topic later. . .), I had one of those moments where I realized how much I was getting out of the interaction. Whether the information I was getting was really solid or not, whether I was truly understanding it or not, I was really in the mental zone! That place of pleasure where you feel like your mind is going places it has never been. (I don’t know if you ever get that feeling, I guess it is like when you have an itch somewhere on your back and you just can’t get to it, and then, wow, you discover your arm can get back there after all, and you sit back for a moment in serious relief. . . like that, except maybe you didn’t even know there was an itch before you started scratching, and it is 100 times better. . . okay, maybe a poor analogy, but it is the best I have right at the moment. . .). Anyway, I was in that spot for a moment, and then my mind started doing that thing it does when it realizes it is satisfied: it starts asking, “Okay, how is this going to get messed up?” Brains, man, the best and the worst.
So, of course, my mind went back to the night before, when I had been watching a football game and a commercial came on for Google’s LLM, Gemini. In the commercial all sorts of people were asking Gemini all sorts of questions and having all sorts of conversations with it. Something about the commercial really got under my skin. (More than most, that is; all commercials aim to get under your skin, it’s the nature of the advertising beast. . . those bastards). I didn’t really realize what was bothering me until that moment of mental bliss. (Okay, that is not exactly true; that night I knew I was bothered because the people depicted in the commercial were doing what I like to do. . . asking interesting and/or absurdly stupid questions of LLMs and seeing what happens, and I was like, “Hey! That’s my thing!” But I also knew immediately that this is gatekeeping, and being a gatekeeper is a sad and messed up way to live. So, I did my best to dismiss that). Once I got my baser reaction under control, I realized that the commercial made me wonder whether the LLMs might be at their best right now, and what would happen when the companies that built them started trying harder to monetize them.
Here is the conversation I had with ChatGPT on this topic. Note that I thought it was interesting that ChatGPT initially assumed I was asking about crypto rather than a large language model. . . I may have talked about crypto with it before, but not nearly as much as LLMs. We have spoken quite a bit, and it usually has a better handle on my preferences; I may have to follow up on this one at some point. . .:
This marks the end of the discussion, but this one definitely seems worth monitoring. Going forward, I’m considering asking the LLM for one or both of the following: a “Further Reading” section, or its sources for the discussion. I haven’t explored asking for sources enough yet, so that’s something I’ll think about and maybe test out. To maintain the integrity of the links ChatGPT provided, the rest is in text rather than image format:
Me:
can you tell me if you are aware of any critics, commentators, or philosophers who have recently expressed similar concerns? if so, can you provide names and links?
ChatGPT:
Yes, several critics, commentators, and philosophers have recently expressed concerns about the commercialization of large language models (LLMs) and artificial intelligence (AI), echoing your apprehensions. Here are some notable voices:
Parmy Olson: In her book "Supremacy: AI, ChatGPT, and the Race That Will Change the World," Olson discusses the rapid rise and potential risks of AI, emphasizing the need for proper regulation and governance to prevent issues related to bias, security, and dependency that could impact various sectors such as education and employment.
Cambridge Researchers: A recent study warns of an "intention economy" where AI can map and manipulate our desires, potentially influencing decisions before we consciously make them. The researchers highlight the risks of AI agents engaging in natural conversations with users to infer intentions and manipulate motivations for commercial gain, leading to significant political and social consequences.
Peter Kirchschläger: In an opinion piece, Kirchschläger criticizes EU policymakers for collaborating with Big Tech firms to regulate digital transformation and AI. He argues that these companies prioritize profit over public welfare, often harming people and violating rights, and calls for independent experts and academics to be involved in policymaking to ensure technological innovation benefits society broadly.
Shannon Vallor: Philosopher Shannon Vallor has delivered critiques of AI, focusing on the ethical implications and societal impacts of AI technologies. Her work emphasizes the need for responsible AI development that considers the broader consequences on human values and social structures.
Yuval Noah Harari: Historian and philosopher Yuval Noah Harari has expressed concerns about the future dominated by AI, examining the destructive potential of information flows and drawing parallels between historical and modern abuses of data for control and manipulation. He emphasizes the collective responsibility in determining AI's impact and the need for stringent checks to ensure it serves humanity's welfare.
"AI Washing" Phenomenon: The deceptive marketing tactic known as "AI washing," where companies overstate the role of AI in their products or services, has raised concerns regarding transparency, consumer trust, and compliance with security regulations. This practice can hamper legitimate advancements in AI and mislead consumers about the actual capabilities of AI technologies.
These perspectives highlight the growing apprehension about the commercialization of AI and LLMs, emphasizing the need for ethical considerations, responsible governance, and public awareness to ensure these technologies benefit society as a whole.
Final note: Regarding the mention of a freemium model for ChatGPT, I do pay $20/month for the more advanced features of ChatGPT 4o, and Google Gemini also offers a premium tier for $20/month. (At this point two paid models are not worth it to me, so I don’t have that one). Finally, OpenAI also offers unlimited access to its most advanced publicly available model for $200/month (that is some serious cheese!), and no, I don’t have a need or the cash for that at this point, although I am intrigued, in that it is OpenAI’s first publicly available attempt at an LLM that can sort of “reason.” I have limited access through my account, but so far in my few uses I haven’t noticed a difference other than it being slower, as it takes more time to process its responses. This isn’t to say it can’t do more than 4o; I likely just don’t have the kinds of questions that would show why it is superior. I understand it may be better at tackling unique math problems? (Another post will probably go into the topic of “Artificial General Intelligence,” or AGI, later. AGI, very loosely defined, is an AI that can understand, learn, and perform any intellectual task as well as a human).
1. I really found the explanation of the 70/30 split to be quite interesting!
2. I’ve spent a decent amount of time reading and thinking about ethics & technology, and my first thought on your exchange is that the AI showed a pretty optimistic assessment of how social pressures can deliver ethical outcomes in tech. It might be interesting to discuss with the AI how ethics and emergent technology influence one another. For example, ethics sometimes seem to be powerfully driving tech, as in the case of privacy concerns leading to the development of encryption technologies and data protection regulations. On the other hand, new tech often challenges existing ethical frameworks and prompts new ethical questions. Take the advent of genetic editing tech like CRISPR, for example. The AI might have an interesting take on how this ethics & tech exchange plays out.