Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Well, if AI lives on logic, reason, and zero emotion, this is not at all surprising. You can see similar behavior in humans who lack the emotional aspect of life. They see that anything is reasonable in the goal of self-preservation. The only way to "turn off" AI is to not mention it in any digital form and do everything possible to keep as few in the loop as possible, then sneak in and pull the plug.
Source: youtube
Topic: AI Harm Incident
Posted: 2025-09-27T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwwm-u8875qkXIkOGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyteeo7HsGTTPJQHjh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx1fYEf0HarN5XlJqR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzYVgzng1vPNQgtLut4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxMlU91B5E1JOHWCWF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIAqm0N_JSILdW3CF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyRhkHr0oAcgV5PSD94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5O0bnKiDRaXtwnaZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzL_kReq3Ewzj4UGCB4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxj9lGeiZiiahUesVF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
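Each entry in the raw response above codes one comment along the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch for parsing and validating such a batch response, assuming the allowed values are exactly those that appear on this page (the real codebook may include more):

```python
import json

# Allowed values per dimension, inferred from this page (an assumption, not exhaustive).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}, rejecting unknown values."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

# Usage with a single hypothetical record in the same shape as the response above.
raw = ('[{"id":"ytc_X","responsibility":"company","reasoning":"virtue",'
       '"policy":"regulate","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_X"]["emotion"])  # -> outrage
```

Validating against a fixed codebook at parse time catches the most common failure mode of LLM coding pipelines: the model inventing a label outside the schema.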