Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good LLMs need multiple 40GB nvidia cards to run at any usable speed. You're looking at $80k for something slow and shitty. Open source won't change much at all.
Source: reddit · AI Harm Incident · 1681479776.0 (Unix timestamp) · score 5
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jg82r12", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jg8ni2i", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jg7wehl", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jg81rw2", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jg87h7y", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
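A minimal sketch of how a per-comment coding result can be recovered from a raw batch response like the one above, assuming the response is a JSON array of records keyed by a comment `id` (the ids and dimension values here are copied from the response shown; the function name `code_for` is hypothetical, not part of the pipeline):

```python
import json

# Raw batch response, verbatim from the log above: one record per coded comment.
raw = (
    '[{"id":"rdc_jg82r12","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_jg81rw2","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"resignation"}]'
)

def code_for(comment_id, raw_json):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

# Look up the record whose dimensions populate the result table.
print(code_for("rdc_jg81rw2", raw))
```

Keeping the lookup against the raw string, rather than a pre-parsed table, preserves the audit trail: the displayed dimensions can always be re-derived from the exact model output.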