Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we'd need to develop some kind of P2P GPU/CPU sharing. These things need a crazy amount of processing power... for the few seconds they spend calculating your question. Really an ideal cloud computing use case. Your home computer might have enough power to answer 50 questions a day, but at half an hour a question. MAYBE. They run on very specialized GPU-like hardware that's $10,000 a card minimum, not sure if you can use your home GPUs, I know the AI stuff has a lot more RAM.
Source: reddit · AI Harm Incident · 1681486234.0 · ♥ 10
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
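
The comment id plus the four coded dimensions form a small, flat record, which makes a typed representation straightforward. A minimal sketch in Python: the field names are taken from the JSON keys in the raw response below, while the class name CodingResult and the example values in the comments are illustrative, not part of the tool.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CodingResult:
        # Field names mirror the keys in the raw LLM response below.
        id: str
        responsibility: str  # e.g. "none"
        reasoning: str       # e.g. "unclear"
        policy: str          # e.g. "none"
        emotion: str         # e.g. "indifference", "approval", "resignation"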
Raw LLM Response
[{"id":"rdc_jg82r12","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_jg8ni2i","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_jg7wehl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"rdc_jg81rw2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},{"id":"rdc_jg87h7y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}]