Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why do you think that's an intelligent conclusion? Because you think so yourself, and consider yourself intelligent? The idea that a "skynet-like" AI will impose a sort of moral codex on itself based on human ideologies doesn't seem based on anything but fear.
reddit AI Moral Status 1597010328.0 ♥ 3
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_g0z7sge", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_g0z95rq", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",    "emotion": "outrage"},
  {"id": "rdc_g103p7j", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_g13w6u3", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_g0xm55u", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
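The raw response above is a JSON array, one object per coded comment, keyed by `id`. A minimal sketch of how such a batch could be parsed and a single comment's coding looked up is shown below; the function name `coding_for` and the `"unclear"` fallback for missing dimensions are illustrative assumptions, not part of the tool shown on this page.

```python
import json

# Raw batch response, verbatim from this page.
raw_response = """[
  {"id":"rdc_g0z7sge","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_g0z95rq","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_g103p7j","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_g13w6u3","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_g0xm55u","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id, raw):
    """Return the coding dict for one comment id, or None if absent or unparseable."""
    try:
        batch = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON
    for entry in batch:
        if entry.get("id") == comment_id:
            # Fall back to "unclear" for any dimension the model omitted (assumption).
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return None

print(coding_for("rdc_g0xm55u", raw_response))
# {'responsibility': 'unclear', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'fear'}
```

The coding shown in the result table (emotion: fear) corresponds to the `rdc_g0xm55u` entry in the batch.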