Post 33: “A Tool Worth Fighting For?”
Part 3 of the “LLMs, a solution in search of a problem?” series
After yesterday’s discussion ended on such a terribly grim note, I wanted to go back to ChatGPT and follow up on a couple of points.
First, I was curious why it had adopted such an informal tone for the conversation. It replied that it was matching my tone, and agreed that because the discussion took such a dark turn, the mismatch ended up being a little jarring.
Second, upon reflection, I thought we may have gotten some “scope creep” in the discussion: I had wanted to focus specifically on LLMs, but we ended up branching into various types of manipulative AI.
The model’s response and the subsequent discussion ended up being interesting enough, in my mind, to warrant their own post. Essentially, it replied that no, most of the AI tech we talked about was not LLMs. But, in its opinion, as LLMs continue to integrate into the same platforms that are already messing with us through their “algorithms,” they will make things much worse, as persuasive, articulate systems are brought in to push people even harder.
I pushed back on this point a little, saying that at least things might be a little more transparent in that case (i.e., an LLM openly pushing you, rather than it solely being subconscious micro-manipulations as AI studies our behavior and reinforces it for optimization). The LLM replied that this transparency wouldn’t really be transparent and (forgive my paraphrasing the model here) would be far more likely to act as a “devil on your shoulder.”
And I get that point. In the profit-and-optimization-maximization world we live in, the platforms that already use “algorithms” to push us would, 100%, use LLMs in ways that push us further. . . towards buying things. . . and, worse, towards the topics that will maximize our engagement and bring more profit to the companies that own the LLMs. . . and in most cases that means further away from being inquisitive, critically thinking members of society and towards being weird little trolls flailing about in our own corners of polarized un-reality.
But. . . this did give me an idea I wanted to introduce here. . . If an LLM can be a devil on your shoulder, could it also be an angel? The LLM and I concluded the conversation by discussing the possibility of a tool (the titular tool!) that could be there with us to help us navigate all of this manipulation and maybe (at least) point out when it is happening. That is something I think an LLM might be well suited for. Anyway, this is really just a proto-idea for now, and we will have to come back to discuss whether it is possible and how it might work. But for the moment, it is at least a slightly less grim (dare I say hopeful?) place to conclude yesterday’s discussion before we dive back into the “malignant” uses of LLMs. . .
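To make the proto-idea just slightly more concrete, here is a minimal sketch of what an “angel on your shoulder” could look like in code. Everything here is hypothetical: the tactic list, the prompt wording, and the `flag_manipulation_cues` keyword check are illustrative stand-ins I invented for this sketch, not a working detector. A real version would send the built prompt to an actual LLM instead of matching keywords.

```python
# Hypothetical sketch of an "angel on your shoulder" manipulation spotter.
# The tactic list, prompt wording, and keyword heuristic below are all
# illustrative assumptions for this post, not a real detector.

# A few persuasion tactics the tool might watch for (illustrative only).
TACTICS = {
    "urgency": ["act now", "limited time", "before it's too late"],
    "outrage_bait": ["you won't believe", "they don't want you to know"],
    "social_proof": ["everyone is talking about", "millions agree"],
}


def build_detector_prompt(content: str) -> str:
    """Build the prompt a real deployment would send to an LLM."""
    tactic_names = ", ".join(TACTICS)
    return (
        "You are a reader's assistant. Examine the content below and point "
        f"out any persuasion tactics (e.g. {tactic_names}) aimed at the "
        "reader, explaining each in one plain sentence.\n\n"
        f"CONTENT:\n{content}"
    )


def flag_manipulation_cues(content: str) -> list[str]:
    """Crude keyword fallback standing in for a real LLM call."""
    lowered = content.lower()
    return [
        tactic
        for tactic, phrases in TACTICS.items()
        if any(phrase in lowered for phrase in phrases)
    ]


if __name__ == "__main__":
    ad = "Act now! Limited time offer everyone is talking about."
    print(flag_manipulation_cues(ad))  # -> ['urgency', 'social_proof']
```

The keyword check is only there so the sketch runs end to end; the interesting part is the prompt, which casts the LLM as the reader’s advocate rather than the platform’s.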