Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_n5gpc17`: "I agree. Also, and this is key, once the AI makes a motion, an associate still n…"
- `ytc_UgxKb6C9H…`: "In case any ai is watching this I always said please and thank you to AI chats…"
- `ytc_Ugz-DGom-…`: "Im not an artist but I’ve seen a lot of believable ai propaganda and want to hel…"
- `ytc_UgwX6AKzs…`: "I agree. The Devil has disguised himself as your Internet Helper. He participate…"
- `ytc_UgxLD5e-U…`: "There are some people that lose their shit if you use AI purely for fun. They ge…"
- `ytr_UgyfPdsyt…`: "That's not the only reason, friend. Some Chinese models have been outed as resu…"
- `ytc_UgzBhjOuv…`: "Remember the “ChatGPT show me a Hamburger” video? The A.I mistaken the Slice of …"
- `ytc_UgwVMAbF0…`: "All the writers and actors who are on strike from your jobs because someone is h…"
Comment
Ehhhhh I dunno about this guy. He says "oh yeah AI can totally tell what is an isn't true, we just can't get it to express that to us" and then moves on without providing any evidence whatsoever. I dunno dude, if it lies constantly and we can't get it to recognize a lie and stop, it seems like our default conclusion should be that it doesn't know what truth is.
Ten minutes later he says, without any apparent self-reflection, "-we have no idea what's going on inside AIs. A because we can't see what's going on inside there, we can imagine that it's whatever we want." He also mentions the classic "it's really hard to convince someone of something when their salary depends on them not understanding it." How about someone whose book residuals and podcast appearances depend on them not understanding something?
Platform: youtube · Video: AI Moral Status · Posted: 2025-11-03T17:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxs4e2UNdweIXZcscJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1a7i9Y0bJagEdERZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2uMrP8Bmv3J1qRBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsL5oUEYvqk1uyj4R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzq0KOoim73dCntkdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzsafBd5FfFmH9EZSB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjJ8DZEUtc3q1DQ-B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4IGU5QzByFqxVLt14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxaO3DME9TsmKePwPV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzFY7J9QxhTDcPdPX14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"skepticism"}
]
```
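The raw response is a JSON array with one coding object per comment, each carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming only the field names visible in the output above; the helper name `index_codings` is illustrative, not part of the tool:

```python
import json

# Two entries copied from the raw batch response above, for brevity.
raw = """
[
 {"id":"ytc_Ugxs4e2UNdweIXZcscJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzFY7J9QxhTDcPdPX14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"skepticism"}
]
"""

# Field names taken from the JSON objects shown above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID,
    rejecting rows that are missing any expected field."""
    by_id = {}
    for row in json.loads(raw_json):
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id', '?')} missing fields: {missing}")
        by_id[row["id"]] = {k: row[k] for k in REQUIRED_FIELDS - {"id"}}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgzFY7J9QxhTDcPdPX14AaABAg"]["emotion"])  # skepticism
```

Validation at parse time matters here because a model can return malformed or incomplete JSON; failing loudly on a missing field keeps a bad batch from silently entering the coded dataset.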