Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The “everything is good” part is what kills me. I use AI tools daily for my own projects — Cursor, Claude, all of it. But the difference is I know what the code is doing before I accept it. I write tests, I check the architecture, and I debug manually when something breaks. The real problem isn't AI itself. It's people skipping the part where they actually understand what's being generated. AI is insanely good at producing code that looks correct. That's exactly what makes it dangerous for someone who can't tell the difference.
reddit · AI Jobs · 1773483861.0 · ♥ 3
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           none
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_oadomn7","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"rdc_oadmyor","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"rdc_oae856v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_oadh9z9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_oadias8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]