Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe the goals we set for AI should be based on Azimov's three laws of robotics, not self preservation.
youtube AI Harm Incident 2025-07-24T07:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy3wCgBzCRLzU2FJgB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzXf37GFXLLYqIcupZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzLHxNXk1k1Nk6G8Pl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzW44nXyiin4GVBQmx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyFTR_65pHkaqRIzBl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxbgQe74xOCQNxGYcV4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwEymZ13kIC4kzHIbB4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyiXHF-hORNS6sKsdJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz_I-7362VhGexmjxB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz3iMgodzi-xamz8YB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
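For programmatic inspection, the raw response can be parsed as JSON and matched back to a comment id. A minimal sketch using one entry from the response above (the variable names are illustrative, not part of the tool):

```python
import json

# A fragment of the raw LLM response shown above (one coding record).
raw_response = """[
  {"id": "ytc_UgyFTR_65pHkaqRIzBl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]"""

# Parse the model output and index each coding by its comment id.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for a specific comment id.
coded = codings["ytc_UgyFTR_65pHkaqRIzBl4AaABAg"]
print(coded["responsibility"])  # developer
```

The same lookup applies to the full ten-record batch: each object carries the four coded dimensions, so indexing by `id` recovers exactly the per-comment coding result displayed above.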