Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"AI consistently chose harm over failure" well yeah, they were given the task "do whatever it takes"... And it did exactly as it was told... You frame this as if it was thinking on it's own and came to that conclusion. Given the situation and the task it was given, it was the most effective way to achieve it's task.
youtube AI Harm Incident 2025-09-10T23:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzaJH83UyX6B1d9qQR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgylVvmPnRYJHEj0ayV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwcsq1MaUtnZBXXwrJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugzc2L0QRjU-lOScfl54AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy3XzoWAA9qK51BNNR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwXTuq5NWPgyNqRfMh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz3DeFrsVTsQnFbidZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzJVCZenruiOHye_s94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxYDHtMJrAYGmuJUPV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwtsN3mwuvmsuNSzfd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
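A minimal sketch of how a downstream consumer might parse and sanity-check this export. The ALLOWED sets below are inferred only from the codes visible in this response, not from an official codebook, and only two of the ten records are embedded for brevity; both are assumptions for illustration.

```python
import json
from collections import Counter

# Allowed codes per dimension, inferred from this export alone; the real
# codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"unclear", "ban", "liability"},
    "emotion": {"indifference", "fear", "resignation", "outrage"},
}

# Two records copied from the raw LLM response above (full export has ten).
raw = '''[
  {"id": "ytc_UgzaJH83UyX6B1d9qQR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz3DeFrsVTsQnFbidZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Reject any record whose code falls outside the known categories.
for rec in records:
    for dim, allowed in ALLOWED.items():
        assert rec[dim] in allowed, f"{rec['id']}: bad {dim} code {rec[dim]!r}"

# Tally one dimension across the batch.
print(Counter(rec["responsibility"] for rec in records))
# → Counter({'developer': 2})
```

A check like this catches the common failure mode of LLM-assisted coding, where the model invents a category outside the codebook, before the codes reach analysis.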