Post 30: LLMs, a solution in search of a problem?
Hello everyone, first off, again, just wanted to thank everyone who has been reading! It is most appreciated! I am on vacation this week with limited computer access (yay!), which also means it is harder for me to be on top of the news and such. (Although, as said before, I am not going to cover current events all that much in this Substack, but if something insane happens this week and I am talking about the topic of this post instead of covering it, you will know why. . . these posts were written and scheduled in advance for the week.)
So the topic: Are LLMs something that was developed with no clear application, as I have heard said? Well, I think the best way to answer this is with a series of posts looking at what “problems” the technology is actually being applied to, and then summing up whether those are really “problems” or not, and how LLMs do or do not solve them.
To lay the groundwork for this series of posts, I had a conversation with ChatGPT 4o as a follow-on to the conversation where we discussed Meta monetizing its bot, ‘Post 22: Thanks for the memories, part 2, (or . . . no thanks?)’, in which we look at how LLMs are being used with monetization in mind and how they may be used in the near future. I asked the LLM to lay the uses out on a scale from Mostly Benign to Mostly Malignant (although the LLM took it one step further and added a “Highly Malignant” category as well. . . I am impressed at how objective this LLM is about itself and its peers. . . for now, at least, this one doesn’t seem to be trying to actively manipulate me into being a supporter of LLMs. . . except if you count just being helpful. . . which I guess I probably should. . . at some point we will have to do a post on how long-term use of LLMs shapes their users. . . I am certain not to be the only one talking about that soon, just the least qualified 👍). See the conversation below for the uses and how ChatGPT categorized them on this scale.
Also, in the conversation below, ChatGPT makes the statement, “Your instinct that LLMs are more than just a “solution in search of a problem” is correct—the problem has been found (processing massive amounts of information quickly), but the solutions being pushed will likely serve the highest bidder rather than the public good.”
Unfortunately, this statement should probably resonate with everyone. So, starting tomorrow, we will begin a series of posts discussing how LLMs are being used now and are likely to be used in the near future, with an eye on that benign-to-malignant scale. I am going to do this a little out of order: we will start with the “mixed” category, work our way to Highly Malignant, and then come back to Mostly Benign. That way we don’t end the series on a sour note, 😀.