Raw LLM Responses
Inspect the exact model output for any coded comment.
Responses can be looked up by comment ID. Random samples:
- "If we replace everyone in the middle with AI there will be mass unemployment and…" (ytc_Ugzw-2_r8…)
- "if ai gets better animators will have an eaiser job cos half the work they do wi…" (ytr_Ugzw5BnIO…)
- "@Glowing0v3rlord Yeah and AI makes it its own. You're a bunch of ironic hypocri…" (ytr_UgzqfV06G…)
- "People say ai will hide if it became self aware, i think because its designed by…" (ytc_UgwiJipLl…)
- "I think we should just separate AI generated art and traditional art. Because, w…" (ytc_Ugz6gbRzm…)
- "Ai was not free from day one. Companies are just making people habitual towards …" (ytc_Ugx-WMt7z…)
- "I have not finished the video so I do not know who won, but it already feels dis…" (ytc_UgwRrOuoI…)
- "i tried talking to ChatGPT about ethics too, brands like AICarma should monitor …" (ytc_UgzTFZTGk…)
Comment

> Humans - Invent Stone arrow. 'Will this be bad?' - "Not for us!"
> Invent Bronze sword "Bad for them, Not for us!"
> Invent Nuclear weapons - "Well, if we launch first..."
> Invent Super AI - "It'll work out, it told me my question was very astute. I think it likes me!"

Source: youtube · AI Moral Status · 2025-11-05T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwONPTSxI16vLASrCx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyIUNV7HqoiN0D2SY94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw7zLXdI8VA8NExWy54AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwIqHqzQK3FRQ-Z9kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxa8G6Hj7-Uy1v2m7F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxZeMbcoz8_B8cfC2B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwlIkV3gUvfeqpbTZt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxPiGyOdYmVGTx4S914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyg5llKGtiBwu0Oaj94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwIUaDRAUUrBlNLvdt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
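A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a minimal example, not the pipeline's actual code: the allowed label sets are inferred only from the values visible in this sample, and the full codebook may define more categories.

```python
import json

# Allowed values per coding dimension (inferred from the sample response
# above; assumption -- the real codebook may include additional labels).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if the JSON is malformed or a record carries a
    label outside the expected vocabulary for any dimension.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dim} value {rec.get(dim)!r}"
                )
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by comment ID is what makes the "look up by comment ID" view above cheap: once parsed, each coded record is a constant-time dictionary lookup.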