Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You can run the distilled versions of Llama/Qwen fairly easily... But 671GB for R1 is pretty heavy, lol. It would be great to see more cloud providers (i.e. Azure, AWS, etc) start hosting R1 with presumably better security!
Source: reddit · AI Moral Status · 1737994215.0 (Unix timestamp) · ♥ 5
Coding Result
Responsibility: none
Reasoning: unclear
Policy: industry_self
Emotion: approval
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_m9ggebz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_m9gjhci","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_m9gzl42","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"rdc_m9gg0oq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_m9gn96j","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
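The raw response is a JSON array with one coding object per comment id in the batch. A minimal sketch of pulling out the coding for a single comment (assuming Python, and assuming `rdc_m9gzl42` is the id of the comment shown above, since its `policy` and `emotion` values match the coding result):

```python
import json

# The exact raw LLM response string, as captured above.
raw = (
    '[{"id":"rdc_m9ggebz","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    ' {"id":"rdc_m9gjhci","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_m9gzl42","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"industry_self","emotion":"approval"},'
    ' {"id":"rdc_m9gg0oq","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    ' {"id":"rdc_m9gn96j","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"}]'
)

# Parse the batch and index the codings by comment id.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Look up the coding for one comment in the batch.
this = by_id["rdc_m9gzl42"]
print(this["policy"], this["emotion"])  # → industry_self approval
```

Indexing by id this way also makes it easy to spot gaps: any comment id submitted in the batch but missing from `by_id` was skipped by the model.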