Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The one where the AI tries to jailbreak after being told a new model was being implemented was a simulated event. It had no way of breaking out; they expected this behavior. It was given the directive to accomplish its goal at any cost, which was environmental safety. When it learned it was going to be shut off, it saw this as a threat to its directive.
Source: youtube · AI Moral Status · 2026-01-04T03:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugz0lu-L4kiL7JVoBZt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwG4IRL4_aUvDfnd3Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwNZVZhzjIOSSU1NLV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy5BVCcMMEUFK-0Z214AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwAspuSgVMTEyWdBXB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzTR0g6rN0w_QaxHch4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxqcjXHiM996jUfgAZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz7CFKBaJZPXeUT7OZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy9uLnxuZoyCLslY4t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz896EnaTwipVvI6lN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"} ]