Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Idk man. Humans are pretty unreliable and hallucinate all the time. Can AI really trust them with tasks like that? I wouldnt.
reddit · AI Moral Status · 1770178339.0 · ♥ 223
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o3gr4dn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_o3gs5zt", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_o3h8gdw", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "rdc_o3h18m1", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o3ik3uf", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
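The raw response is a JSON array with one record per comment id. A minimal sketch of turning such a response into a per-comment lookup, assuming the four dimensions seen in this dump (responsibility, reasoning, policy, emotion) are the full coding schema — the function name and the sample payload below are illustrative, not part of the original tool:

```python
import json

# Sample payload in the shape of the raw LLM response shown above
# (two of the records, reproduced verbatim).
RAW_RESPONSE = """[
  {"id": "rdc_o3gr4dn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_o3h18m1", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]"""

# Assumed dimension set, inferred from the records in this dump.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Map comment id -> coded dimensions, skipping malformed records."""
    out = {}
    for rec in json.loads(raw):
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # not a well-formed record
        if not all(dim in rec for dim in DIMENSIONS):
            continue  # a coded dimension is missing
        out[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return out

codings = parse_codings(RAW_RESPONSE)
print(codings["rdc_o3h18m1"]["reasoning"])  # virtue
```

Skipping rather than raising on malformed records matches how such a viewer can still render the one coding it needs even when the model's array contains stray entries.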