Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by its comment ID.

Random samples:
- `ytr_UgxGY8thi…`: "The title is *_If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill …"
- `rdc_d7krr3t`: "Prices aren't fixed, though, and money doesn't have inherent value. If you doub…"
- `ytc_UgypQxecw…`: "Look no matter how good the AI is it won't replace the real artist especially in…"
- `ytc_UgwVmZy7c…`: "Humans prefer to deal with humans. If there is no jobs because of AI there is no…"
- `ytc_Ugw03MA45…`: "As someone who is chronically ill ((physically and mentally)), I've been very co…"
- `ytc_UgwovkLX7…`: "I can understand the complaints about AI art, but people hating on AI Music are …"
- `ytr_Ugzo5VnnQ…`: "Actually there will be no exams because cost of ai will be 2$/h but a human doct…"
- `ytc_UgyOOV0jK…`: "This guy is clueless! Take a look at what the godfather of AI says and Bernie S…"
Comment

> Its not ai that has to be blamed for anything! Everything is result of our own manifestation

youtube · AI Moral Status · 2025-09-19T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyPgDn4q5NWeqF9vr14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyKBcbGsrTILYhS0Yp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwGBWiXJutU9PvlS0l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwpbNyViOP4kgODP-B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzS3S03Kt2q9-efBfR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxjjwB8X38pYjJPlil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyxV1YOK0AE0aAPYEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzzx1ouYIz8eRw5-Wh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypsxgCNUj-6CQfbzN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyNP3YKwg4-Hhkhim14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
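The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw response as a JSON array of per-comment codings and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the raw response shown above, and the two example records are copied from it.

```python
import json

# Two codings copied from the raw LLM response above, used as sample input.
raw_response = """
[
  {"id": "ytc_UgzS3S03Kt2q9-efBfR4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyxV1YOK0AE0aAPYEh4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw response and index each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_UgzS3S03Kt2q9-efBfR4AaABAg"]["emotion"])  # resignation
```

With the codings indexed this way, any comment ID from the sample list resolves to its full coding in constant time.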