Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I'm pretty sure secret lizardmen couldn't even dream of being as awful as AI-bro…" (ytr_Ugxb2U3B4…)
- "Why'd you need AI to tell you that? XD Also I'm suspicious of Brave because it'…" (rdc_o6aw317)
- "One name/term I think perfectly contradicts a lot of these "AI isn't bad" argume…" (ytc_UgzTwZqPD…)
- "A.I has no feelings or moral compass and it will ALWAYS become evil. Its also pa…" (ytc_UgysfBdvk…)
- "The skin on the face needs to be thinner. And there needs to be flexible silicon…" (ytc_UgwjaYJti…)
- "Charlie, the real intelligent guy did a fantastic job to explain artificial inte…" (ytc_Ugy3Tfr7i…)
- "Yeah, if lying requires intent, than the ai has no intent. So whether the ai eve…" (ytr_UgzcNmjut…)
- "AI isn't gonna take my job until it learns to make fertile soil out of dead sand…" (ytc_Ugx1umEzh…)
Comment

> If A.i gets rewarded somehow the way we do when accomplishing something I think is the only way it can become aware. And I mean chemically the way we do.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted at | 2026-03-18T00:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugyqxt5IJdXB9eelCf94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy5yPGi8gAxbasB_-F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz2ErNQXQH8qkVLRQB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwlWgW6XRvwqNIr4Ud4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgwiEmsfSvp8hElW1Vl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRLP7ffkwSQwPfo7d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyCa9XeQ6kg8kKmZcF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwRBfcWInC8fby0QUx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwzMmUTWxajU8mFlpN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzv7FBttz_OetAoEgB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
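Since the raw response is a plain JSON array of per-comment records, looking up one comment's codes by ID (the same lookup this page offers) can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation; the variable names are hypothetical, and the two records are copied from the response above.

```python
import json

# A raw LLM response in the format shown above: a JSON array of
# per-comment coding records (two records copied here for brevity).
raw_response = """
[
  {"id": "ytc_Ugyqxt5IJdXB9eelCf94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwlWgW6XRvwqNIr4Ud4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
"""

# Index the batch by comment ID so any single record can be retrieved.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up one comment's codes.
record = codes_by_id["ytc_UgwlWgW6XRvwqNIr4Ud4AaABAg"]
print(record["policy"])  # regulate
```

Indexing the whole batch into a dict up front makes each subsequent ID lookup O(1), which matters if the same response is inspected repeatedly.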