Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.

Random samples
- guess i ain't the only one who thought that i was the only weird one always than… (ytc_UgzGVrESE…)
- you fools think google is here to create a better world with creating a sentient… (ytc_UgzrU7z8I…)
- It might be, but we're not going to get there by throwing more data at neural ne… (ytr_Ugx1b8JAN…)
- @CorB33 I'm asking which LLM or model it is. So far, nobody has been able to ans… (ytr_Ugw2pzwMS…)
- "You think we used automated self checkout to save on wages? Ha! You can't beat … (ytc_Ugz0HV3Cd…)
- 👎 This is the worst ML "AI" explanation i ever heard, the ignorance, confusion a… (ytc_UgxshN67C…)
- Ngl at first bedore i saw the ai i thought it was a 4 legged using wendigo😭… (ytc_UgyaYbf2u…)
- Ok now conduct the same analysis for other vehicles on the road. Will their cars… (ytc_UgxO7Yy5t…)
Comment

> The AI will start killing all the humans, then we'll accuse it of genocide and say, "Don't do that, it's wrong." Then the AI will say, "But you did the same thing in Gaza and nobody wanted to do anything about it." Touche! Bye!

Source: youtube · Video: AI Moral Status · Posted: 2025-06-05T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzncFgYW2ktZ6k3Ycx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzv-xXyuoXxCaCxWrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzBq9OYhm-pGSntC-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwooTcF9U7u9YEbc654AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzrkr6QQ38prsM0v3Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyS4BPMxErU6wgjoA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzVzPupTPnnq0Yc4H14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwdHxOW1X7reE6XA8l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyvDjWfmQoR5rLQGat4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbMWJZmbidDT5yIMF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
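A batch like the one above can be consumed programmatically by parsing the JSON array, checking each record against the coding dimensions, and indexing it by comment ID. The sketch below assumes the five-field schema visible in the raw response; the sets of allowed values are an inference from the values that appear on this page, not a documented codebook.

```python
import json

# Allowed values per dimension -- an assumption inferred from the sample
# records and the Coding Result table above, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "ban",
               "unclear"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear",
                "unclear"},
}

# A one-record excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgwooTcF9U7u9YEbc654AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "contractualist",
   "policy": "none",
   "emotion": "mixed"}
]'''

records = json.loads(raw)
by_id = {}
for rec in records:
    # Reject any record whose value falls outside the inferred code set.
    for field, allowed in ALLOWED.items():
        if rec.get(field) not in allowed:
            raise ValueError(f"{rec['id']}: bad {field!r}: {rec.get(field)!r}")
    by_id[rec["id"]] = rec

print(by_id["ytc_UgwooTcF9U7u9YEbc654AaABAg"]["reasoning"])  # contractualist
```

Indexing by `id` is what makes the "look up by comment ID" view cheap: one dictionary lookup per inspected comment rather than a scan over the batch.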