Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not compute but rather ram/vram that is the bottleneck. You'll need 512GB of Ram at least to run a respectable quant of r1. And it will be slow as hell that way. Like going to lunch after asking a question and coming back to it still not being finished kinda slow. The fastest way would be to have Twelve to Fourteen plus 5090s. But that's way too expensive... Only r1 is worth anything. The other distilled versions are either barely better than the pre-finetuned llms or even slightly worse.
Source: reddit · Topic: AI Moral Status · Posted: 1737990609.0 (Unix time) · ♥ 9
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_m9ggebz", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m9gjhci", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_m9gzl42", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_m9gg0oq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m9gn96j", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
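The raw response is a JSON array of coding records, one per comment id. A minimal sketch of parsing such a response and looking up one comment's codes (the two-record sample string here is an assumed abbreviation of the full array above):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment id.
# Abbreviated to two records for illustration.
raw = ('[{"id":"rdc_m9ggebz","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"},'
       '{"id":"rdc_m9gn96j","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')

records = json.loads(raw)

# Index records by id so any comment's codes can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

print(by_id["rdc_m9gn96j"]["emotion"])  # resignation
```

Indexing by `id` makes it straightforward to join each coded record back to its source comment when the batch covers several comments at once.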