Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m in infrastructure, so my career for three decades has been identifying and correcting bugs in production. These days there are much better systems for it. The interesting thing is that I was working on a framework for technical debt as a side project over the past few years, and when you look at even clever teams making simple mistakes (e.g. Linear’s truncate cascade or Path of Exile’s database migration), it means that even though we have knowledge of these problems, the scope is getting too large to maintain verification on every step. AI is accelerating the time to deploy, which is both great and concerning. The mistakes we still make are baked into the models. The mistakes we make that we don’t know about are, too.

But it also means that processes similar to those that work for humans can work for LLMs:

- Adversarial testing: don’t tell it the script is wrong; have another LLM write a script to prove it’s wrong and provide that to the dev AI.
- Test-driven development: use a different model to write the unit tests; then you can use Haiku to iterate and “solve” the integration practically for free.
- Two or more reviewers before merge, none of whom can be the dev: a good spot for a human in the loop, but it can be offloaded to LLMs on a risk scale.

Everyone jumping only on Claude Code doesn’t realize that Löb’s Theorem shows the system designing something cannot also verify what it is designing, or it will eventually acquiesce to its own bias. This is a mathematical proof; I know it because I’m on the other side. Although I’m working on AI dev now also, it’s fantastic if you know systems processes. I was always just too lazy to learn syntax.

Also, just FYI: run a separate Claude instance with a separate CLAUDE.md (or agents.md, whatever) for your infrastructure tasks: Docker, deployments, DB migrations, backup and restore, etc. You want to isolate your infrastructure from your source if you can, and these days it’s easy to do things right. Just ask Claude “what development system process
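The review workflow the commenter describes (the authoring model may not approve its own change; two independent reviewers gate the merge) can be reduced to a small control-flow sketch. This is a hypothetical illustration, not any real tool's API: the names `gated_merge`, `ReviewResult`, and the stub "models" (plain text-in/text-out callables) are all assumptions standing in for actual LLM endpoints.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: a "model" here is any text-in/text-out callable.
# In practice each would wrap a different LLM endpoint or checkpoint.
Model = Callable[[str], str]

@dataclass
class ReviewResult:
    approvals: int
    merged: bool

def gated_merge(change: str, reviewers: List[Model], author: Model,
                required_approvals: int = 2) -> ReviewResult:
    """Require approvals only from reviewers that are not the authoring model."""
    votes = 0
    for reviewer in reviewers:
        if reviewer is author:           # the dev model may not review itself
            continue
        if reviewer(change).strip().lower() == "approve":
            votes += 1
    return ReviewResult(approvals=votes, merged=votes >= required_approvals)

# Stub models for demonstration; real reviewers would return free-form text
# that you would parse more carefully than an exact-match "approve".
dev_model = lambda prompt: "approve"     # the author always likes its own work
reviewer_a = lambda prompt: "approve"
reviewer_b = lambda prompt: "reject"

result = gated_merge("add retry logic to deploy script",
                     [dev_model, reviewer_a, reviewer_b], author=dev_model)
print(result)    # ReviewResult(approvals=1, merged=False)
```

Note the design choice the comment hinges on: the author's own vote is skipped entirely rather than counted and discounted, so a self-approving model contributes nothing to the merge decision.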
Source: reddit · Viral AI Reaction · 1777064082.0 · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi3149m", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jv6putt", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_jv5z0hd", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jv5thgd", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jv5tb43", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
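The raw response above parses as ordinary JSON. A minimal sketch of tallying the five records per dimension follows; the tally is only an illustration of how one might inspect agreement across records, not the coder's actual aggregation rule, which is not shown in this export.

```python
import json
from collections import Counter

# The raw coder output copied from above, treated as plain JSON.
raw = '''
[
  {"id":"rdc_oi3149m","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jv6putt","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_jv5z0hd","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jv5thgd","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_jv5tb43","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
'''

records = json.loads(raw)

# Tally each coded dimension across the five records.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
tally = {dim: Counter(r[dim] for r in records) for dim in DIMENSIONS}

print(len(records))                              # 5
print(tally["responsibility"].most_common(1))    # [('developer', 2)]
```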