Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work at a company that is leaning heavily into AI and two things are true: productivity has increased dramatically and token use is expensive as hell. To mitigate the latter, we’re exploring things like local models that can run on our laptops for certain tasks and a prompt routing system that optimizes token usage (Cloudflare does this and recently posted a blog about their implementation). Given how fast this has happened, everyone is still figuring it out. Cost is becoming more and more of a factor (and will reveal its true self when the big guns IPO), but there are options.
Source: reddit · Viral AI Reaction · timestamp 1777035767.0 · ♥ 94
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi0i6g2", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi08ld9", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_oi1jjem", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oi0j3sb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi1bwpy", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
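The coding result shown above corresponds to one record in the raw batch response. A minimal sketch of how a coded record could be looked up by its id (the ids and field names below are copied from the JSON above; the variable names are illustrative):

```python
import json

# Raw batch response, abbreviated to two of the five records shown above.
raw = (
    '[{"id":"rdc_oi0i6g2","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_oi08ld9","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"industry_self","emotion":"approval"}]'
)

records = json.loads(raw)

# Index the batch by id so one comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

coding = by_id["rdc_oi08ld9"]
print(coding["responsibility"], coding["emotion"])  # company approval
```

Each dimension in the result table (Responsibility, Reasoning, Policy, Emotion) maps one-to-one onto a key of the matching JSON object.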