Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just saw one the other day where a lawyer briefly thought his case was about to be decimated by his opponent, who filed a brief citing tons of cases that upheld his interpretation of the law. The original lawyer said he thought he knew the law and was momentarily rattled by how wrong he was. Then he began looking into the case law in the document and discovered 100% of it was fictional. In some cases it referenced a court ruling that existed but made up conclusions that didn't; in others the cases simply didn't exist. Opposing counsel got reamed by the judge and narrowly avoided sanctions. Anyhow, AI can't do anything that requires actual thinking. It just spits out shit that looks like things it's seen before. The reason coding is a target is the mistaken belief that it's not real engineering, pioneered by the "move fast and break shit" culture of internet companies. If your app is just letting people look at pictures of cats, it doesn't really matter if it breaks. But ask programmers working on nuclear reactors whether they intend to replace their multi-year review processes with hallucinated shit. Or automotive code, where erroneous code can kill somebody and land you in jail.
reddit AI Jobs 1761863437.0 ♥ 24
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_nm9v3td","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_nmagxfm","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_nm9g8y5","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_nm8zok8","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_nm8uuio","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"} ]
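A minimal sketch of how a coding result like the one above could be pulled out of the raw LLM response. This assumes only that the raw output is a valid JSON array of records keyed by `id`; the record id used here is taken from the response shown above, but the field names and lookup logic are illustrative, not the tool's actual implementation.

```python
import json

# Raw LLM response, abbreviated to one record from the array above.
raw = ('[{"id":"rdc_nm9v3td","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')

# Parse the array and index the records by their comment id.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Look up the coded dimensions for one comment.
coded = by_id["rdc_nm9v3td"]
print(coded["responsibility"])  # none
print(coded["emotion"])         # indifference
```

Indexing by `id` makes the lookup robust even if the model returns the records in a different order than the comments were submitted.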