Post 19: Thank you. . . for the memories. . .
(Or another Feature Creature. . . so sorry for that. . .)
(Apologies in advance for the image, it is a very deep cut from Danger Mouse. Either you watched the show and get it (and if you did, you are also probably warped enough to think it’s funny), or you didn’t, and I won’t bore you by trying to explain it, only to get to the end of the explanation and have you say, “I can’t believe I took the time to read that. . . I’ll never get those minutes back. . .” . . . and hopefully you don’t think that about this entire post :).)
Taking the time today to discuss another interesting feature of ChatGPT. ChatGPT, if you don’t opt out, will retain “memories” of your preferences across different conversations. It adapts in real time to how you like to converse and what you like to converse about. . . it sort of, well, gets to know you. As far as I can gather from poking around a little, ChatGPT is currently the only “major” LLM that does this (I checked Gemini, Copilot, and Claude directly. . . I also asked about X’s Grok and Meta AI, which I have never used personally and basically know nothing about. . . and the answer I received was that they don’t have it either. There may be other LLMs that do this? But I don’t know about them).
This honestly may spook people: do you really want an AI learning and remembering so much about you? It certainly could be viewed as a trade-off between (maybe?) privacy and continuity. (Other than Claude, I am guessing the other AIs also do this but just aren’t up front about it like ChatGPT is. PLEASE NOTE: I am just speculating here, I have no way to actually know this. . . don’t sue me.) I say maybe about privacy assuming you are using an LLM. . . or really doing anything online. . . we all know at this point that all of our data is being hoovered up and used to sell us shit, right? We all know this? You get the canoe ads five minutes after you talk to your spouse about buying a canoe, right?
Anyway, there is another potential trade-off I see from this feature: between the LLM being helpful and pleasant to you and being neutral. By personalizing your experience based on your preferences too much, the LLM may cease to be objective and steer conversations toward pleasing you rather than giving you good info. (Hey, at least the attempts to please at this point haven’t turned into trying to sell you a canoe, right?)
In this discussion ChatGPT and I discuss what the memory feature is, what it does, how it works, what you can do about it if you don’t like it, its potential pitfalls, and how those pitfalls can or cannot be mitigated. No final answer here as usual, but some ideas. . .
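For the curious, here is a toy sketch of the general idea. To be clear, OpenAI has not published how ChatGPT’s memory actually works, so everything below (the class name, the JSON file, the prompt wording) is my own invention purely for illustration: persist a few preference notes between sessions, inject them into each new conversation, and offer a “forget everything” control for people who opt out.

```python
# Hypothetical sketch of a cross-conversation "memory" store.
# This is NOT ChatGPT's actual implementation (that isn't public);
# it just illustrates the concept: save preference notes to disk,
# prepend them to each new chat, and let the user wipe them.

import json
from pathlib import Path


class PreferenceMemory:
    def __init__(self, path="memories.json"):
        self.path = Path(path)
        # Load any notes remembered from earlier conversations.
        self.memories = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, note):
        """Save a preference note, e.g. 'prefers concise answers'."""
        if note not in self.memories:
            self.memories.append(note)
            self.path.write_text(json.dumps(self.memories))

    def forget_all(self):
        """The opt-out / wipe-memory control."""
        self.memories = []
        if self.path.exists():
            self.path.unlink()

    def system_prompt(self):
        """Build the prompt that starts each new conversation."""
        if not self.memories:
            return "You are a helpful assistant."
        notes = "; ".join(self.memories)
        return f"You are a helpful assistant. Known user preferences: {notes}."
```

A real system would be far more sophisticated (deciding *what* is worth remembering, summarizing, weighting), but the shape of the trade-off is visible even here: whatever lands in that stored list quietly colors every future answer.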
This is a really fascinating discussion! Having spent a fair amount of time professionally thinking about heuristics and how to improve human decision making in high stress situations, I'm intrigued by the idea of an AI that can learn a specific decision-maker's heuristic profile, biases, and aspirations for decision improvement. Imagine that tailored AI standing by as an analytical assistant to rapidly analyze that decision-maker's potential decisions. It could instantly offer that decision-maker elevated awareness to counterbalance known biases and heuristic leanings. While there would be obvious benefits for CEOs, military commanders, etc., I think it would be incredibly helpful for almost anyone to use for big decisions.