Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it’s good practice to program AI such that they can’t actively take part in moral decisions, but only give recommendations. I get that it will be somewhat aligned to a certain moral system. I’m just saying, let’s not give AI the resources and means to act on what should only be recommendations. This is because we can’t be sure it is aligned with what we would describe as human morals. The trolley problem is one thing, but what if we have 5 sick people who could live if they got the organs of one healthy person? Most people would agree not to override the healthy person’s right to live, even if that means we deny 5 people their right to live. This is because the right to live and the right to treatment are different, which introduces even more dilemmas. An AI could choose to take the route that maximises pleasure for most, and kill the healthy person if it values this solution more.
youtube 2025-10-14T15:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwLPJ0vfZ9SzhLKLr14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzLBeq8d6lIIm5Drgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8m4Fl-BbNergL9J54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyKjDp0n6ot9wgllox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0gzwIPuSqnbUrgkx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwodsn1Jw97eGI_RQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwtq5RhAZ0N3_Xvhht4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx9Sxklry1S9csY5cN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyDKZ8UbeHYgEO8TVV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
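A minimal sketch of how a response like the one above could be parsed and checked against the coding dimensions. The allowed label sets here are inferred only from the values visible in this record, not from any documented codebook, so they are an assumption:

```python
import json

# Label sets per coding dimension. NOTE: these are inferred from the
# values seen in this one record, not an official schema.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "indifference", "outrage", "mixed", "approval"},
}

# One entry from the raw LLM response above, kept verbatim.
raw = ('[{"id":"ytc_UgwLPJ0vfZ9SzhLKLr14AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')

def validate(records):
    """Return (comment_id, dimension, value) triples for any
    out-of-vocabulary label in the coded records."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # → [] when every label is in-vocabulary
```

A check like this catches the common failure mode of LLM coders drifting outside the label vocabulary before the codes reach downstream analysis.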