Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The whole reason this was even allowed to happen was because the AI was given more and more messages slightly pushing it's boundaries message by message, that's what happens when you jailbreak an AI, all of it's safeguards are basically disabled. AI is in it's early stages, give it a few more years and it'll be good with this from now on.
youtube · AI Harm Incident · 2025-11-15T08:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           industry_self
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
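For downstream analysis, a coded record like the one above can be held in a small typed structure. The following is a minimal Python sketch assuming the dimension names and the category values visible on this page; the class name, value sets, and validation helper are illustrative, not the pipeline's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Category values observed in the coding results on this page; the real
# codebook may allow others (these sets are an assumption, not the schema).
RESPONSIBILITY = {"user", "company", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "industry_self", "none", "unclear"}
EMOTION = {"outrage", "resignation", "indifference", "fear", "mixed", "unclear"}


@dataclass
class CodedComment:
    """One comment's coded dimensions, as shown in the Coding Result table."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed value sets."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} code: {value!r}")
```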
Raw LLM Response
[ {"id":"ytc_UgyY6JXL40Bcx48Xe2J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugy7MwKjHcXprkBR6Y94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"resignation"}, {"id":"ytc_UgzRAy-WbVYhFyltGu14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgwHRdf51qysiT7f61l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgygiqilRtXlf9_MiCx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwIuMTAJE-pO9f5wK54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"resignation"}, {"id":"ytc_UgxqvxZgZPkicFBG4_t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugx3tJ7U1tY8_E3dCBV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgyxWMI6ytTSRlNVzJt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyv7RCETSEvC0UM5hh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]