Raw LLM Responses
Inspect the exact model output behind any coded comment, either by looking up its comment ID or by picking one of the random samples below.

Random samples
- ytc_UgxRU64la… : I don't understand why people feel the need to demonize AI. The amount of people…
- ytc_UgxLf3zDR… : Could it fit in the "comic book" precedent if one was to make an album of fully …
- ytc_UgzTbg3H8… : In 3D animation tools, an animator’s role involves creating the foundation for t…
- ytc_UgwfDFDJ3… : “Yeah, but your scientists were so preoccupied with whether or not they COULD th…
- ytc_UgxnO9FNP… : Artificial intelligence requires massive amounts of metadata, and this data is s…
- ytr_UgxjOfmDf… : Not really. I can see the edited area around the robot as a slightly blurry area…
- ytr_Ugw_K-KFY… : The accessibility comes in many different forms. Time is money, and people are o…
- ytc_Ugy5ZPH15… : Nobody says go away to a robot toaster that’s sad very very sad but if you unplu…
Comment
Spreading fear sells. For the anthropic study (blackmail) conditions were simulated that don't actually occur. Grok4 had a wrong update and interacted with X users. His racism says more about X users than about AI. If ChatGPT urges you to kill yourself, you've tricked it beforehand, and this doesn't work anymore. However, the problem is always in front of the keyboard. We are the monsters they try to learn from.
youtube · AI Moral Status · 2025-12-14T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzGEME-owwxeX8TR894AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgznmhtnuR2Vrj0Vg7x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxZvyBqQokrli3vipd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyES6rGbump5oESeOF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwbcBlRiMwoE8xemLl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugznemf5SmoyhHjaZzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTKAo0CWeeN3tz6Qd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYdqA1srApGgrJMVt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxO4rF2KUydd62ijJR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxFp_B1cne2DptDoMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
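A response like the one above can be parsed and checked before it is loaded into the coding table. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed value sets in `SCHEMA` are inferred only from the dimension table and the sample response shown here, so the real codebook may define additional values.

```python
import json

# Allowed values per dimension, inferred from the "Coding Result" table
# and the sample response above (assumption: the real codebook may be larger).
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"resignation", "fear", "indifference", "outrage", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Raises ValueError on any dimension value outside SCHEMA, so malformed
    or hallucinated codings fail loudly instead of entering the dataset.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec[dim]!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

Keying the result by comment ID makes the "Look up by comment ID" view a single dictionary access once all raw responses have been parsed.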