This is a really fascinating discussion! Having spent a fair amount of time professionally thinking about heuristics and how to improve human decision-making in high-stress situations, I'm intrigued by the idea of an AI that can learn a specific decision-maker's heuristic profile, biases, and aspirations for decision improvement. Imagine that tailored AI standing by as an analytical assistant, ready to rapidly analyze the decision-maker's potential decisions and instantly offer elevated awareness to counterbalance known biases and heuristic leanings. While the benefits for CEOs, military commanders, and the like are obvious, I think it would be incredibly helpful for almost anyone facing a big decision.
Thinking about this in the context of high-stress situations (which also tend to come with extreme time constraints) is really interesting! Having a dispassionate, neutral advisor could be very useful. However, imagining it used that way, while it appeals to me intellectually, still makes my skin crawl for some reason. I think that's because instead of picturing it stripped of biases and heuristic leanings, as you say, my mind snaps to something that advises without empathy. (One of my biases, no doubt fed by some of the imagery depicted on the Dread Meter.) But... such an advisor could also have moral frameworks like utilitarianism and respect for human rights built in... lots to think about as these models get more advanced.