Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
You're looking at it in the wrong way. AI at it's core is designed to be *given* an input, and then *predict* the correct output. That isn't really a choice, that's just the nature of "intelligence" in its broadest sense. To achieve this, the AI is given a huge load of inputs, then it guesses the correct output, and then an outside program checks which versions of the AI were closest to the "expected" output. Then using some Darwinian math, the best versions of the AI are essentially "bred" with each other and then these new versions of the AI that get created are put through the test again. Based on that, why wouldn't the first inputs used be text and images?
reddit · AI Jobs · 1705873684.0 (Unix timestamp)
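The comment above is loosely describing an evolutionary (genetic-algorithm) training loop: score candidates against expected outputs, keep the best, "breed" them, and test the offspring again. A minimal toy sketch of that loop follows; every name, the linear toy model, and the mutation settings are illustrative assumptions, not any real training system.

```python
import random

def fitness(weights, inputs, expected):
    """Score a candidate "AI": negative total prediction error (higher is better).
    Toy model: prediction for input x with weight w is simply w * x."""
    return -sum(abs(w * x - y) for w, x, y in zip(weights, inputs, expected))

def breed(parent_a, parent_b):
    """The "Darwinian math": mix two parents' parameters, plus a small mutation."""
    return [random.choice(pair) + random.gauss(0, 0.05)
            for pair in zip(parent_a, parent_b)]

def evolve(population, inputs, expected, generations=100, keep=4):
    for _ in range(generations):
        # An outside check ranks which versions were closest to the expected output.
        population.sort(key=lambda w: fitness(w, inputs, expected), reverse=True)
        survivors = population[:keep]
        # The best versions are "bred" with each other, and the new
        # versions are put through the test again next generation.
        population = survivors + [
            breed(random.choice(survivors), random.choice(survivors))
            for _ in range(len(population) - keep)
        ]
    return max(population, key=lambda w: fitness(w, inputs, expected))

random.seed(0)
inputs, expected = [1.0, 2.0, 3.0], [0.5, 1.0, 1.5]   # target: each weight near 0.5
population = [[random.uniform(-1, 1) for _ in inputs] for _ in range(20)]
initial_best = max(fitness(w, inputs, expected) for w in population)
best = evolve(list(population), inputs, expected)
final_best = fitness(best, inputs, expected)
```

Because the top `keep` candidates survive each generation unchanged (elitism), the best fitness can never get worse across generations; real evolutionary training (e.g. neuroevolution) follows the same select-breed-retest shape at vastly larger scale.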
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kiuvmog", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_kixlh0s", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_kiy574r", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_kiw7kk7", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_kithtn1", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
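The model returns codes for a batch of comments as a JSON array, and the table above corresponds to the entry with id `rdc_kiy574r`. A minimal sketch of parsing and validating such a response follows; the field names and ids come from the dump itself, while the `DIMENSIONS` tuple is an assumption inferred from the table above.

```python
import json

# The raw model response, verbatim from the dump above.
RAW = '''[ {"id":"rdc_kiuvmog","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"rdc_kixlh0s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"rdc_kiy574r","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_kiw7kk7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_kithtn1","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"} ]'''

# Assumed coding dimensions, based on the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(RAW)
# Check that every record carries an id plus all four coding dimensions.
for record in codes:
    missing = [d for d in ("id",) + DIMENSIONS if d not in record]
    assert not missing, f"{record.get('id')}: missing {missing}"

# Index by comment id to pull out the code for the comment shown above.
by_id = {record["id"]: record for record in codes}
coded = by_id["rdc_kiy574r"]
```

Validating before indexing matters in practice: LLM output is not guaranteed to be well-formed JSON or to include every dimension, so a batch that fails these checks can be flagged for re-coding instead of silently producing partial rows.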