5 Comments
Andrew Cumings

The AI seeing itself as having "a functional role in ethics discussions" seems to me like a pretty natural fit. Setting up and elaborating on an ethical problem seems like an ideal place for AI contributions. Just from observing your conversations so far, it seems clear to me that AI would have a helpful role in keeping ethical reasoning firmly planted in a systematic approach, applying well-established ethical frameworks (e.g., consequentialism, deontology) for new perspectives, spinning out different lines of reflection on the problem, and taking a crack at applying the problem to distinct examples that might stir up implications worth considering in the overall analysis. Where I am drawn up short is that final stage of ethical reasoning that brings the ethical problem to the point of moral judgment.

As I see it, ethical analysis can frequently lead to analysis paralysis, until the decision-maker (hopefully taking due notice of the ethical analysis) exercises moral judgment. We place a great deal of trust in human moral judgment, which is often an unwieldy chimera of ethical analysis of some sort, intuition, emotional drive, identity-based personal values, and feelings of obligation. This Post-8 conversation got me wondering whether there will ever be some form of moral judgment (or its equivalent) that humans trust in which AI plays a prominent role in the actual judgment, not just the up-front ethical analysis.

Nathaniel Barber

Just a quick thought on this: the algorithm that denies your medical claim vs. a truly neutral arbitrator.

Andrew Cumings

Yeah, AI is certainly already occupying the moral judgment space in an under-the-radar but very important way: denying insurance claims, over-policing certain communities (https://daily.jstor.org/what-happens-when-police-use-ai-to-predict-and-prevent-crime/), and circumventing military targeting processes (https://www.972mag.com/lavender-ai-israeli-army-gaza/). All of this is firmly in the realm of moral judgment, without much discussion of whether that is an appropriate role for AI.

Nathaniel Barber

Kind of makes you question the "human-centric" ethical frameworks that lead to the decisions to put AI in charge of these specific moral judgments...

Nathaniel Barber

And on the other hand... humans making the same judgments regarding insurance claims, policing, and military targeting haven't always been rock steady, ethically speaking... who is making the better decisions here? Humans or AI? (Real question, I have no idea.)

On the other other hand, there is the question of culpability too. When AIs make these decisions, there is no one to hold morally responsible except the person or people who gave them the power to decide. That kind of once-removed, diluted responsibility doesn't really sit well though... does it? (Even if we already sort of do that with corporate entities... whoa! Getting way far afield!)
