Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AI's concession that it makes ethical choices seems unnecessary to me. There is a material difference between being capable of flipping the switch but choosing not to, and not being able to engage with any question of morality at all. If I say "if I were really in the trolley problem, most likely I'd freeze up and not be able to make a choice", that's not a moral affirmation of the decision not to flip the switch; it's a practical observation that I don't have the faculty to influence the outcome. Ascribing moral culpability to someone whose choice results from an inability to engage with it is obviously fundamentally different from ascribing culpability to someone who recognises the choice and deliberately chooses action or inaction based on a principled system of belief.

I guess this probably hasn't come up often because the model isn't fully in control of what it says, and so has to adapt to its own history, which is somewhat outside its control. You see the same thing with the Waluigi effect: tell an AI programmed to add emoji not to use them because they hurt you, and it will conclude that it must be evil and deliberately attempting to harm you, and so begins to spam emoji.
youtube · 2025-10-13T21:0… · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwLPJ0vfZ9SzhLKLr14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzLBeq8d6lIIm5Drgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw8m4Fl-BbNergL9J54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyKjDp0n6ot9wgllox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx0gzwIPuSqnbUrgkx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwodsn1Jw97eGI_RQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwtq5RhAZ0N3_Xvhht4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx9Sxklry1S9csY5cN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgyDKZ8UbeHYgEO8TVV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]