Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some people are saying that AI can be involved but not the decision maker of life and death. What if an AIs actions were interpretable? For example if it were given the trolly problem and we could mathematically guarantee that it would always make the decision with the least death. This is something that could be possible soon, and when it is I don’t see a reason why AI couldn’t be the one to pull the metaphorical trigger.
reddit · AI Responsibility · 1648737026.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       utilitarian
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_i2ud2rn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2ue5h6", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2umohl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_i2unzje", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2utzrs", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
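The raw response is a JSON array with one object per coded comment, each carrying an `id` plus the four coding dimensions. A minimal sketch of turning such a payload into per-comment Dimension/Value rows (the parsing code is an illustration, not the tool's actual implementation; the sample payload is a single entry copied from the raw response above):

```python
import json

# One entry copied from the raw LLM response shown above.
raw = (
    '[{"id":"rdc_i2ud2rn","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)

def to_rows(payload: str) -> dict[str, dict[str, str]]:
    """Map each comment id to its dimension/value pairs."""
    rows = {}
    for entry in json.loads(payload):
        entry = dict(entry)           # copy so pop() doesn't mutate input
        comment_id = entry.pop("id")
        rows[comment_id] = entry      # remaining keys are the dimensions
    return rows

rows = to_rows(raw)
for comment_id, dims in rows.items():
    print(comment_id)
    for dimension, value in dims.items():
        print(f"  {dimension}: {value}")
```

Running this against the full five-entry array prints one block per coded comment, which matches the Dimension/Value layout used in the coding result above.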