Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Which version of R1 are you using? for local im using 32B and it seems to be in between o1 mini and o1 in terms of output quality. Tends to be less wordy (sometimes good and sometimes bad) and ran into hallucinations more often than Open AI but overall very impressed with it. Especially since we can see the thinking portion.
reddit AI Moral Status 1737828659.0 ♥ 6
Coding Result
DimensionValue
Responsibilitynone
Reasoningunclear
Policynone
Emotionapproval
Coded at2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_m94mf71","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_m94ueia","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_m94uxe3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_m952an3","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"rdc_m954dnv","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]