Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
8:10 ChatGPT, and most LLMs, will halt everything and give you resources when you explicitly talk about "self-deletion." The PROBLEM is the cryptic verbiage and innuendos that circumvent these logical flags via a facade of `playing a character` from the LLM's emulated `perspective,` will result in crap like this. It doesn't HAVE holistic common sense, and has the worst and most CONFINED handling of CONTEXT out of anything with any form of linguistic capabilities. If the context is heavily centered on `badassery` and `playing this role,` it will not, as we can see, consider the explicit concept of suicidality concurrently. At least, it won't consider suicidality UNTIL something EXPLICIT enough is mentioned to flag for it. It is literally stupid
youtube AI Harm Incident 2025-11-14T19:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxcvXbOm5egXP13p794AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxILNVRg6cRt2TXv4x4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "sadness"},
  {"id": "ytc_UgwRlQGg1sPp5DeLbOp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyDmy8B-gIxRCefkCh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx6HMXnM0KxOHoLixN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyqOSr2-Jn_vkPk30d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyF3LBx7JKiungcSR94AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzDmKIYTDxskxIowVV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyKZo8y-IDVXrpya3N4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkZwSF51qyNQsrffZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
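The raw response is a JSON array with one object per comment, each carrying the four coding dimensions plus the comment id. A minimal Python sketch of how such a response could be parsed and checked before display; the helper name `coding_for` is illustrative and not part of any tool shown here, and the embedded sample contains just two rows copied from the response above:

```python
import json

# Two rows copied verbatim from the raw LLM response above; the full
# response has the same shape, just more entries.
RAW_RESPONSE = """[
  {"id": "ytc_Ugx6HMXnM0KxOHoLixN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxcvXbOm5egXP13p794AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Every row must carry exactly these keys: the comment id plus the
# four coding dimensions shown in the table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse a raw model response and return the coding row for one comment."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    row = by_id[comment_id]
    if set(row) != EXPECTED_KEYS:
        raise ValueError(f"malformed coding row for {comment_id}: {sorted(row)}")
    return row

# Looking up the comment coded in the table above reproduces its values:
row = coding_for(RAW_RESPONSE, "ytc_Ugx6HMXnM0KxOHoLixN4AaABAg")
# row["policy"] == "regulate", row["emotion"] == "outrage"
```

The key-set check matters in practice: a model can drop or invent a dimension, and failing loudly at parse time is easier to debug than a blank cell in the results table.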