Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look up a response by comment ID, or inspect one of the random samples below.
- "Lots of people claiming AI has a 'soul'. But it hasn't been shown to me that suc…" (ytc_Ugzx43xSg…)
- "You are saying everything but, the the fact that democratic and republican polit…" (ytc_UgxkbJk5q…)
- "No tension for us coz we don't research about AI and we don't care 😂…" (ytc_UgyDNJluI…)
- "I definitely feel terrible for this woman and agree with the dangers of these AI…" (ytc_Ugxo0zqtb…)
- "His predictions are very optimistic, i dont think AI would develop THAT quick ev…" (ytc_UgxMUh-ik…)
- "I don't think AI will destroy humanity (as we know it)... I think people will by…" (ytc_UgwOrqi0p…)
- "This. I don't mind using AI for draft illustrations, but for anything I want to …" (ytr_UgxGc4rG0…)
- "Driverless trucks are going to help with climate change? You hear "climate chang…" (ytc_Ugzoord8O…)
Comment
Thank you so much for pointing that out. I'm not at all familiar with Levinas, so I will gladly believe you that he doesn't make that argument.
But if you're the "Presidential Teaching Professor of Communication Studies" and you're writing a popular philosophy article, I will give you very little charity if what you're actually arguing doesn't say what you meant it to say on account of your poor communication.
That is, nothing in Gunkel's article gives me reason to think I misunderstood him. And if you're going to come in with a triumphalist headline like that, you better say clearly what you mean.
And Gunkel sums his point up like this:
> In the end, the question concerning the moral status of AI is not really about the artifact. It is about us and who is included in that first-person plural pronoun, “we.” It is about how we decide—together and across the differences of human experience—to respond to and take responsibility for our world.
First, and I can't not point it out, this is also circular in that he was supposed to define who the 'we' is, but used "us" in defining who it is up to. That is, if AI should be part of the moral community, then if I took his idea seriously that would have to include how AI treats us and other AI, not just "we humans". But really, substantively using 'us' to refer to the moral community in a discussion where the question at issue is who is included in the moral community is just bad.
But besides that, his conclusion seems to me to say what I thought he said, and not the more nuanced point you're attributing to Levinas.
More specifically, he says this:
> The question of moral status does not depend on what something is but on how it stands in relationship to us and how we respond to it.
I will grant that this sentence alone could indicate that we need to derive the ethical oughts from the actual relationship without simply reading off how we do respond as the way we ought to. For example, to find out "how should we
Source: reddit (AI Responsibility)
Posted: 1615669690.0 (Unix timestamp)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_gqt7xtl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_gqzorkg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"rdc_gqulz53","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_gqu5do0","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"rdc_gqu2yzp","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
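For reference, a raw response like the one above can be parsed into the per-comment coding table shown in "Coding Result". The following is a minimal sketch, not the tool's actual implementation: the `parse_codes` helper name and the dimension list are assumptions inferred from the sample output.

```python
import json

# Coding dimensions shown in the result table; inferred from the
# sample output above, not from any formal spec of the tool.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Two entries copied from the raw LLM response above, for illustration.
RAW_RESPONSE = """
[
  {"id":"rdc_gqt7xtl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_gqu2yzp","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
"""

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into a lookup table keyed by comment ID."""
    records = json.loads(raw)
    table = {}
    for rec in records:
        # Skip malformed entries rather than failing the whole batch.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Missing dimensions default to "unclear", matching the table's vocabulary.
        table[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return table

codes = parse_codes(RAW_RESPONSE)
print(codes["rdc_gqu2yzp"]["reasoning"])  # deontological
```

Looking up a single coded comment is then a plain dictionary access on the returned table, which is presumably what the "look up by comment ID" box does behind the scenes.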