Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ur CPU is the bottleneck. I casually run 10 agents with say 2 subagents each, no problem on a shitty PC, but I'm on Linux. 30 agents with no subs is my max, and 1 agent with 50 subagents is another max. Clocks around 80% on 3.80GHz, 16GB RAM, 12 cores. If u have a more powerful machine, a local LLM would be nice. I will set up local LLMs in the future, when I upgrade, or just get a tiny model to test with. I use subscription models with no issue.
Source: reddit · Thread: Viral AI Reaction · Timestamp: 1776961015 · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_ohrxxj5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_oht18qn","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"rdc_ohy73aw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"rdc_ohudqfd","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_ohufw67","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]