Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No! LLMs don't "consider" anything. They take the prompt you give them, then using a massive relational database, determine the most likely words that would respond to such a prompt. They don't "know" what they're talking about. They cannot think. They don't have any idea what any of the things they're saying actually mean. They're just giving you the most probable response based on the weights of all the input data. Like, when you ask these tools to draw a cat, they don't know what a cat actually is. They don't know what its eyes, nose, ears, etc. are. They just know that, given your prompt, each pixel in the response is most likely going to look a certain way, and that's it. Stop anthropomorphizing these things.
Source: reddit · AI Jobs · 1772204394.0 · ♥ 4
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o7ohrwj", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o7ojuwr", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_o7ojynh", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o7qliov", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_o7pl6a6", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
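The raw response is a JSON array coding several comments in one batch, keyed by comment id. A minimal sketch of how such a batch might be parsed back into per-comment coding records (the variable names and lookup are illustrative, not the tool's actual pipeline):

```python
import json

# Raw model output as returned: a JSON array, one coded record per comment.
raw = """[
  {"id": "rdc_o7ohrwj", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o7ojuwr", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_o7ojynh", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o7qliov", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_o7pl6a6", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

# Index the batch by comment id so any single comment's coding can be looked up.
records = {r["id"]: r for r in json.loads(raw)}

# Example lookup: the four coded dimensions for one comment id.
coding = records["rdc_o7pl6a6"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# developer unclear none indifference
```

Validating that every record carries the same four dimensions (plus `id`) before loading it into the results table would catch malformed model output early.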