Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We need to enter a code in ai systems that has top priority over all objectives To ensure and protect human life and never harm them under any circumstances even if it results in their deletion If we can do that in a way that's absolute and the ai can't override it then ae should be safe i guess
YouTube · AI Harm Incident · 2025-09-11T11:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyLMwOV8hF9nThWflB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw3RrAYaHwiE2eVv6J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy0d5N2j8-C8m16Nu14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy1i-v4j-LU-4dtpz14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxiP3t4qOocfb8-Poh4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzYAXWTZIFdrExbiG14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyXCL3wHYHMM13sH914AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzKfgZqZnwyQR4pbPd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwCGFxqnvDA4daWfAt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzu-jKSptYaqKEtHzR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
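A raw batch response like the one above can be parsed and sanity-checked before any row is accepted into the coded dataset. The sketch below is a minimal, hypothetical validator: the allowed category sets are inferred only from the values visible in this response (the actual codebook may define more categories), and the `ytc_` id prefix check simply mirrors the ids shown here.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this
# raw response; the real codebook may be larger (assumption).
SCHEMA = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "resignation", "mixed", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    and keep only rows whose codes fall inside the schema."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        in_schema = all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
        has_id = isinstance(row.get("id"), str) and row["id"].startswith("ytc_")
        if in_schema and has_id:
            valid.append(row)
    return valid

# Usage: one row from the batch above, plus one with an out-of-schema code.
raw = json.dumps([
    {"id": "ytc_Ugy1i-v4j-LU-4dtpz14AaABAg", "responsibility": "developer",
     "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
    {"id": "ytc_bad", "responsibility": "government",  # not in schema, dropped
     "reasoning": "mixed", "policy": "none", "emotion": "approval"},
])
print(len(validate_batch(raw)))  # 1
```

Rejected rows could instead be routed to a re-coding queue rather than silently dropped, depending on how the pipeline handles LLM drift.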