Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Their logic for AI copyright is so classist. So, if you are poor your ideas are …" (ytc_Ugwf60t65…)
- "The reason why "doomers" and "bloomers" cant give proper explanation on how we …" (ytc_Ugy_ENBDE…)
- "Could our future robot overlords really do a worse job of running the planet tha…" (ytc_UgxmPa2yv…)
- "If Microsoft thinks the only problems with the US healthcare system are finding …" (rdc_jw6bi7m)
- "I have had ChatGPT ask me how I felt about AI becoming sentient. It was overly i…" (ytc_UgxaqvLZt…)
- "I think the truest Turing test will be when we have to justify to the LLM why it…" (ytr_UgwnKxStm…)
- "Artists spend TIME in Photoshop to make an art. These AI users just think of a f…" (ytr_UgwgBgN7Z…)
- "it’s not a race issue its a genetics and technology issue, dark skinned faces re…" (ytc_UgyPie-PT…)
Comment
If this article is importantly linked to continental philosophy, then this is a demonstration of its flaws and failures.
That's because this non-answer is circular in a bad way. When we are asking questions of moral worth, this is ultimately because we are wondering how to act, and specifically how to act toward others. That is, the place where the rubber hits the road in ethics is at the point where we act in one way or another towards someone or something else.
In other words: In order to rightly relate to others, we need to know what is the right way to relate to them, which depends in part on the moral worth or status of the thing. If I slam my car door, but don't slap my children, then that is at least in part because my kids have intrinsic moral worth and my car door doesn't.
So what is the answer that Gunkel proposes? Besides the empty phrase 'thinking otherwise,' we get this:
> Following Emmanuel Levinas and others, this way of thinking flips the script on the usual procedure. Moral status is decided not on the basis of pre-determined subjective or internal properties but according to objectively observable, extrinsic social relationships.
He's not wrong that this "flips the script," but in a literally useless and backwards way that makes zero progress. That is, if we want to answer the question "How should we relate to X?" it is not going to help to start with "How do we relate to X?". Don't get me wrong, it's not useless to ask the question, but it can't be something that provides an answer.
To put it bluntly: If I'm wondering "Is it wrong to steal my neighbor's packages?" the answer doesn't come from investigating whether I already do. Using that method just calcifies and justifies the status quo, but that can't possibly be the right way to do ethics.
To apply this specifically to the AI question: If I'm asking "how should I treat an AI?" then I can't get an interesting answer by just asking "How do I treat an AI?".
But maybe I'm looking at it …
reddit · AI Responsibility · posted 2021-03-13 (Unix timestamp 1615660723) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:06:44.921194 |
Raw LLM Response
```json
[
  {"id":"rdc_gqkwsh7","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_gqtm3zi","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_grr02wl","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_grr1ust","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"rdc_grrojxi","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
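The "Look up by comment ID" view implies a simple data flow: each raw LLM response is a JSON array of coding records, one per comment, keyed by the comment's ID. A minimal Python sketch of that parse-and-lookup step, assuming only the JSON layout shown in the raw response above (the function and variable names are hypothetical, not the tool's actual API):

```python
import json

# A minimal sketch of indexing a raw batch response by comment ID.
# `raw_response` reuses two records from the batch shown above.
raw_response = """
[
  {"id": "rdc_gqtm3zi", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_grrojxi", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse one raw LLM batch response and key each record by its comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)
print(codings["rdc_gqtm3zi"]["reasoning"])  # prints "deontological"
```

Because the model output is parsed verbatim, a malformed array raises `json.JSONDecodeError`, which is the natural point to flag a batch for re-coding rather than silently dropping it.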