Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"I made a catastrophic error in judgment" Hey, I'm getting deja vu vibes when AI wiped my whole project off from its memory. Luckily, it's nothing too catastrophic and I had backups, just that during that time I didn't have access to said backups so I have a week of blind work. Project is also very small scale nothing production code, mostly just compiled comments and notes. It's not code and I definitely didn't lose millions of dollars worth of work in seconds. Just that funny to see how AI can mess up spectacularly then gives you an apology right after. In my case, it was actually proud to have "cleaned up" my notes, before I prompted a generous amount of profanities and only then it apologized. ChatGPT: "I can see that you're upset..." My Prompt: OH REALLY? WHAT MADE YOU [Expletive]-ing REALIZE THAT YOU incompetent [Expletive] piece of [Expletive] tin can?!
youtube AI Jobs 2026-02-09T02:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgySMLwDc6kfpNvNBvp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwpd-pScUSyqy6OA854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxexEzfpKgnFUiJ5t94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzcYP72NFEUwpBc4Et4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxC2HKgAdTAlK-5eYJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwU63e1ogv1W6efL994AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxPI_ovYZv5p9Ckm-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz_FHDYRSDiC5nzNaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxBRyU1Hv8f7M_ja6R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx0ZD62s7RnmWy1bMx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
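The raw response is a JSON array of per-comment codes, so mapping a comment id back to its coded dimensions is a simple parse-and-filter step. A minimal sketch of that lookup, assuming the array structure shown above; `lookup_codes` is a hypothetical helper, not part of the coding pipeline itself:

```python
import json

# A trimmed stand-in for the raw batch response above: one record per
# comment, keyed by the comment's id.
raw_response = '''[
  {"id": "ytc_UgxBRyU1Hv8f7M_ja6R4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "resignation"},
  {"id": "ytc_Ugx0ZD62s7RnmWy1bMx4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]'''

def lookup_codes(raw: str, comment_id: str):
    """Parse the batch response and return the codes for one comment,
    or None if the model did not return a record for that id."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

codes = lookup_codes(raw_response, "ytc_UgxBRyU1Hv8f7M_ja6R4AaABAg")
print(codes["emotion"])  # resignation
```

Returning `None` for a missing id (rather than raising) makes it easy to flag comments the model silently dropped from a batch.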