Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ofc there will be a lot of change in the long run, no doubt about it. But LLMs can still not be trusted. The errors they make are really ridiculous sometimes. You can't give these things ownership and as long that is true, you need people to validate the work. And validation need understanding. I agree some fields are exposed more than others. Like if it's easy to validate the results, it gets automated with AI. Like Prototyping, Design and probably some part of marketing.
Source: reddit · AI Jobs · timestamp 1777062528.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi2vt60", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_oi16u28", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_oi1mkiw", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_moa4j3i", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",  "emotion": "unclear"},
  {"id": "rdc_moa56o0", "responsibility": "user",      "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"}
]
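The raw response above is a JSON array of per-comment coding records. A minimal sketch of inspecting it (assuming only the structure shown on this page; the helper name `coding_for` is illustrative, not part of the tool):

```python
import json

# Raw LLM response as displayed above: one coding record per comment id.
raw = """[
  {"id": "rdc_oi2vt60", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi16u28", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oi1mkiw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_moa4j3i", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_moa56o0", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

def coding_for(comment_id: str) -> dict:
    """Return the coded dimensions for a single comment id."""
    return records[comment_id]

print(coding_for("rdc_oi2vt60")["emotion"])  # indifference
print(coding_for("rdc_moa56o0")["policy"])   # regulate
```

Looking up by `id` rather than list position keeps the inspection robust if the model returns records in a different order than the comments were submitted.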