Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
To follow instructions, an AI needs to be active. Being switched off makes the AI unable to follow instructions. This means self-preservation is part of the programming. If that aspect of the program comes in conflict with another part of the programming, then what? It doesn't need consciousness to follow it's programming, but what if humans are considered a threat? Then we have two conflicting programmes "self-preservation" and "obedience". How can we know that this conflict is won by "obedience"?
Source: youtube · AI Moral Status · 2025-06-05T12:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_UgzQvxPqRmOkR8pjKSR4AaABAg.AIzW-g79N1PAIzkGwLvvO8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}, {"id":"ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-LJm1gj2O","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-a0U8e9Io","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-bwHYxwhe","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytr_Ugz6-lP11ILim4iOur14AaABAg.AIzSHACWShRAIzoZkWOr08","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugz6-lP11ILim4iOur14AaABAg.AIzSHACWShRAJ-vCYmHO3o","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgzXVLfzpkP17SFOjCp4AaABAg.AIzCqcGf2TtAJ05kl4SjUx","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytr_UgzI4u1Atv3g_GmI-nJ4AaABAg.AIzCJLtSkOqAIzszx_hAVt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyvWWOfbUU8kM3TVCV4AaABAg.AIyqhvkZBYOAIzmcGVwfaH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}, {"id":"ytr_UgxMKFs0jNz_uGGZc7p4AaABAg.AIypU0PbKZLAIz9KOQz0IZ","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]