Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs are trained on the structure of language. There's no meaning behind its output; it just outputs one of the most probable next tokens/words based on the current context, which includes everything it has generated up to its most recent word. Even the "reasoning models" are just taking some time to feed their own output back into themselves for a bit to artificially generate more context, which can narrow down the probability of the output, but it can also get them trapped with pointless context. That's why it reflects back whatever is put into it: the input anchors the context. You give it "war scenario" and it will output probabilities based on all the things related to war that were in the training data. Turns out, a lot of sci-fi contains war and *a lot* of nukes.
Source: reddit, "AI Jobs", timestamp 1772038769.0, score 15
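The comment above is describing autoregressive generation: score every candidate next token against the context so far, sample one, append it, repeat. The following is a toy sketch of that loop only; the vocabulary and the scoring rule are invented for illustration and bias toward tokens already in the context to mimic the "anchoring" effect the commenter describes.

import random

VOCAB = ["war", "scenario", "nuke", "peace", "treaty", "."]

def next_token_probs(context):
    # A real model computes these probabilities from learned weights;
    # here we simply upweight tokens already present in the context.
    weights = [2.0 if tok in context else 1.0 for tok in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(context, max_new_tokens=5):
    for _ in range(max_new_tokens):
        probs = next_token_probs(context)
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        context = context + [token]  # the output feeds back into the context
    return context

print(generate(["war", "scenario"]))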
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_o7cran9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"rdc_o7ch3q2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_o7cnr4j","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"rdc_o7cth2b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_o7cnyzu","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]