Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yup, exactly, same experience here. Any LLM solution I’ve seen - whether designing it myself or seeing the work of my peers - has failed spectacularly. This tech crumbles when faced with real, back office business problems. People seem to forget that we’re working with a probabilistic, hallucination prone text predictor, not the digital manifestation of a human-like super intelligence. Arguably worse than the masses of people deluded into believing they’re witnessing reasoning is the massive crowd of LLM cultists who are convinced they’ve become machine whisperers. The “skill issue” crowd genuinely thinks that finding semi-reliable derivations of “commands” fed into an LLM qualify as some sort of mastery over the technology. It’s a race to the fucking bottom. More people need to read “The Illusion of Thinking” by the Apple team.
Source: reddit · Topic: AI Responsibility · Timestamp: 1755580324.0 · ♥ 215
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n9hirzi", "responsibility": "none",        "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n9i7k14", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n9ic0xu", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n9h9ui0", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n9irgxw", "responsibility": "none",        "reasoning": "mixed",            "policy": "none", "emotion": "resignation"}
]
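As a minimal sketch of how a raw response like this can be matched back to a coding result: the model returns one JSON object per comment in the batch, keyed by a record id. Assuming the comment above corresponds to `rdc_n9h9ui0` (the only record whose fields match the Coding Result table; the id-to-comment mapping is not shown here), the lookup is:

```python
import json

# The raw model response shown above, verbatim: a JSON array with one
# coding object per comment in the batch.
raw = '''[
  {"id": "rdc_n9hirzi", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n9i7k14", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n9ic0xu", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n9h9ui0", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n9irgxw", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]'''

# Index the batch by record id so a single comment's coding can be inspected.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Assumed id for the comment above; its fields match the Coding Result table.
rec = records["rdc_n9h9ui0"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → developer consequentialist none mixed
```

Note that the dimension names in the table (Responsibility, Reasoning, Policy, Emotion) map one-to-one onto the lowercase JSON keys, so the table can be regenerated from the raw response alone.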