Post 17: What is it exactly you do here (well)?
After the post that brought about the AI Existential Dread Meter, I went back to the conversation that contained the error (you know, the one that made me think my ChatGPT usage was equivalent to the energy consumption of an entire American household) to see if the model and I could work out what went wrong. My supposition was that it likely had something to do with its training and the way it gets new information. I think I was right?
The gist (if ChatGPT is to be believed) is that these models are pretty good (frighteningly good?) at discussing and synthesizing information that has run through their training protocols in the form of an enormous training data set. Hence, want to have a discussion about Aristotle and Plato? Bingo! ChatGPT is your man! Music from the 70s? It knows the Jackson 5 discography like the back of its figurative hand. However, want to find out what the gross national product of Sweden is this year? Oops, you're going to have some problems. Why? When it pulls new information from the net, it basically just spits it out at you. It's about as bad as googling it yourself. If you google something and get 5 different answers, ChatGPT is going to pick one of them and stick with it unless you call it out. Also, let's say that of the 5 answers, 1 is from the Swedish government and the other 4 are from sWeDENFax!.booty. If one of the figures from sWeDENFax! happens to be listed first in the search, there is a chance the model grabs that figure instead of the figure from the obviously better source. Will ChatGPT get better at "critical thinking" on the web? No idea. But be wary if you weren't already.
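To make the failure mode concrete, here's a toy sketch in Python. This is purely my own illustration, not how ChatGPT actually works under the hood; the domain names and figures are made up, and the "trusted" list is an assumption for the sake of the example. The point is just the difference between grabbing whatever a search lists first versus preferring a known-good source.

```python
# Hypothetical illustration of two retrieval strategies.
# Domains and numbers below are invented for the example.

TRUSTED_DOMAINS = {"scb.se"}  # assumption: Sweden's official statistics agency

def pick_first(results):
    """Naive strategy: take whatever the search engine lists first."""
    return results[0]

def pick_trusted(results):
    """Better strategy: prefer a result from a trusted domain, if any."""
    for result in results:
        if result["domain"] in TRUSTED_DOMAINS:
            return result
    return results[0]  # fall back to the first hit

# A dubious source happens to rank above the official one.
search_results = [
    {"domain": "swedenfax.example", "gnp": "999 billion"},  # made-up figure
    {"domain": "scb.se",            "gnp": "615 billion"},  # made-up figure
]

print(pick_first(search_results)["domain"])    # the dubious source wins
print(pick_trusted(search_results)["domain"])  # the official source wins
```

If the model behaves anything like `pick_first`, whichever answer ranks highest wins, regardless of who published it. That's the googling-with-extra-steps problem in a nutshell.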
See the convo below: