Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>"These models are capable of reasoning through unique cases at my work." They really can't; they just apply patterns they were trained on. I work as a developer, and half the time the code agent gets stuck in a loop doing and undoing things, or suggests improvements that would break its previous work. It has no understanding of the larger codebase. They are not the chatbots from the late 2000s, because those were just chained ifs and cases, but people who think LLMs actually reason have very low standards for what they consider reasoning. A simple check: grab a chess board image, move the pieces around from the initial setup, then ask the LLM to describe an opening. It will identify that the layout is fucked up, and still move the knight as if it were a bishop or a pawn. LLMs don't understand abstract concepts. They cannot grasp a concept such as a piece in a game behaving differently from the others; they only understand that in chess, to win, you usually have to start the game by moving one thing into another predefined position.
reddit · AI Moral Status · 1750956973.0 · ♥ 34
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | developer                  |
| Reasoning      | unclear                    |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[ {"id":"rdc_mzwiulz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_mzwvfsp","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_mzwnged","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"rdc_mzwwku6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"rdc_mzxm3nm","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]