Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
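Under the hood, a lookup like this amounts to matching the comment ID against the stored coding records. As a minimal sketch only (the file name, helper name, and storage layout below are assumptions, not this page's actual backend), it could look like:

```python
import json
from typing import Optional


def lookup_coded_comment(path: str, comment_id: str) -> Optional[dict]:
    """Return the coding record whose 'id' equals comment_id, or None.

    Assumes `path` points to a JSON file holding a list of records shaped
    like the raw LLM responses shown further down on this page, e.g.
    {"id": "rdc_dds3r39", "responsibility": "user", ...}.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)


# Hypothetical usage, with an ID taken from the raw response shown below:
# record = lookup_coded_comment("coded_comments.json", "rdc_dds3r39")
```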
Random samples — click to inspect
- ib iro well even AI trucks will need to be repaired so there will always be a ne… (ytr_UgwC0HoqE…)
- @Mel-mu8ox There's a third option: Abolish intellectual property. NOBODY should … (ytr_UgwCAHQl8…)
- AI kills 90% of jobs which in turn kills 90% of consumers in turn kills 90% of b… (ytc_Ugw5VaOA3…)
- The “Oh look, an asteroid, I hope it wants to be friends” like is going straight… (ytc_UgzsRXCAl…)
- You managed to put into words why I don't like using "AI art looks bad" as an ar… (ytc_UgxGc4rG0…)
- No, on so many levels no. AI everywhere has downloaded this copy the moves in. T… (ytc_UgyxWM7P3…)
- The human truck drivers blocking our Fast-Lane , even the section that says Tr… (ytc_UgzuC85KO…)
- This is a deadly machine, not liking, Want No part in it, And the Lord (God) wil… (ytc_UgxYSzojA…)
Comment
I might be completely off topic here, but as a follow up question (I am not the OP), you say:
> Something that doesn't have a will is probably not a moral agent. If so, we couldn't hold it responsible. However, it may still be an 'innocent' threat, like a rock on top of a building that could fall and hit someone on the head.
But, in a simplistic view, the one who placed the rock there could be held responsible. How about the creator of such a system?
Assuming at some point AI reaches that level of intelligence (and public usage) which could signify some danger (from decision making in self-driving cars to terminator), should the "creators" be held responsible?
And, in the same topic, if the creators should be held responsible, does their responsibility stop in case the system exhibits "will"?
Source: reddit · Topic: AI Moral Status · Posted: 1487177441.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_dds1kol","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_dds1ott","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"rdc_dds7hhz","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_dds1oe9","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_dds3r39","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
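For reference, the Coding Result table above is one element of this array rendered dimension by dimension: the entry with id rdc_dds3r39 carries the same user / mixed / liability / fear values. A small sketch of indexing the parsed response by comment ID (variable names are illustrative, and the array is truncated here to two entries):

```python
import json

# Raw LLM response as shown above, truncated to two of the five entries.
raw_response = '''[
  {"id": "rdc_dds1kol", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dds3r39", "responsibility": "user", "reasoning": "mixed",
   "policy": "liability", "emotion": "fear"}
]'''

# Index the codings by comment ID for quick lookup of any dimension.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}
print(codings["rdc_dds3r39"]["emotion"])  # -> fear
```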