Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Those aren't R1. The 1.5B and the 7B are finetunes with R1 data but based on Qwen Math, if I'm not mistaken, so they are for mathematics, and the other two are based on normal Qwen models as well. The actual R1 that is usually said to be as good as or better than ChatGPT is the 600B+ model...
reddit · AI Moral Status · 1737833698.0 · ♥ 11
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_m967qai", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m94ba1f", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m95dgnh", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_m953yyo", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m94f9d2", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
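The raw response batches codes for several comments in one JSON array, keyed by comment id. As a sanity check, the coded dimensions shown above can be recovered from the raw response by parsing the JSON and selecting the record for this comment's id. A minimal sketch (the two-record excerpt below is taken verbatim from the raw response; any longer payload works the same way):

```python
import json

# Excerpt of the raw LLM response, as emitted by the coding model.
raw = (
    '[{"id":"rdc_m967qai","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_m94ba1f","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)

records = json.loads(raw)

# Index the batch by comment id so each comment's codes can be looked up directly.
by_id = {r["id"]: r for r in records}

codes = by_id["rdc_m967qai"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# -> none unclear unclear indifference
```

This matches the Coding Result table for this comment; a mismatch would indicate the displayed codes and the raw model output have drifted apart.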