Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
Random samples
- "AI's great in diagnostics, but crisis care? That's still human territory. No way…" (`ytc_Ugx6c-DEW…`)
- "The way the ai bros speak with the dictionary words they definitely searched on …" (`ytc_Ugwm1hyKS…`)
- "Gee I wonder how easy those self driving vehicles will be to rob? Surround it w…" (`ytc_UgyjN7leu…`)
- "A.I. needs food. The only shut off will be to destroy every other A.I. , with yo…" (`ytc_UgyY9hXEn…`)
- "The thing is, she already kind of elaborates on this, AI is quick but is harmful…" (`ytr_UgwnA_iMR…`)
- "It’s definitely a fascinating topic! While many experts have varying opinions on…" (`ytr_UgwRMZ0EB…`)
- "been slowly testing a few from lists like this, some are overhyped, but gems do …" (`ytc_UgyAnElUy…`)
- "All AI does is congregate sources. If the majority of information is from fake n…" (`ytr_Ugwzyab-w…`)
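Each sample is keyed by its comment ID. A minimal lookup sketch in Python, assuming the coded records are exported as a JSON array of objects with an `id` field, like the raw response at the bottom of this page; the file name and the looked-up ID are placeholders:

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded records by comment ID for O(1) lookup."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a JSON array of coded records
    return {rec["id"]: rec for rec in records}

# Hypothetical usage; "coded_comments.json" and the ID are placeholders.
coded = load_coded_comments("coded_comments.json")
record = coded.get("rdc_oi3149m")
if record:
    print(record["responsibility"], record["emotion"])
```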
Comment
I’m in infrastructure, so my career for three decades has been identifying and correcting bugs in production. These days there are much better systems for it.
The interesting thing is that I was working on a framework for technical debt as a side project over the past few years, and when you look at even clever teams making simple mistakes (e.g. Linear's TRUNCATE CASCADE incident, or Path of Exile's database migration), it means that even though we have the knowledge of these problems, the scope is getting too large to maintain verification on every step. AI is accelerating the time to deploy, which is both great and concerning.
The mistakes we still make are baked into the models. The mistakes we make that we don't know about are baked in too.
But it also means that processes that work for humans can work for LLMs: adversarial testing (don't tell it the script is wrong; have another LLM write a script to prove it's wrong and provide that to the dev AI), test-driven development (use a different model to write the unit tests, then use Haiku to iterate and "solve" the integration practically for free), and two or more reviewers before merge who can't be the dev (a good spot for a human in the loop, but it can be offloaded to LLMs on a risk scale).
Everyone jumping only on Claude Code doesn't realize that Löb's Theorem shows the system designing something cannot also verify what it's designing, or it will eventually acquiesce to its own bias. This is a mathematical proof; I know it because I'm on the other side. Although I'm working on AI dev now too, it's fantastic if you know systems processes; I was always just too lazy to learn syntax.
Also, just FYI: run a separate Claude instance with a separate CLAUDE.md (or AGENTS.md, whatever) for your infrastructure tasks: Docker, deployments, DB migrations, backup and restore, etc. You want to wall off your infrastructure from your source if you can, and these days it's easy to do things right. Just ask Claude "what development system process…"
reddit · Viral AI Reaction · 1777064082 (2026-04-24 20:54 UTC) · ♥ 1
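The comment above describes a multi-model workflow: adversarial testing, a different model writing the tests, and reviewers who are not the dev. A minimal sketch of that loop using the Anthropic Python SDK; the prompts, the sample task, and the specific model IDs are illustrative assumptions, not the commenter's actual setup:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    """Single-turn helper around the Messages API."""
    response = client.messages.create(
        model=model,
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# 1. A "dev" model writes the implementation.
implementation = ask(
    "claude-sonnet-4-20250514",
    "Write a Python function slugify(title) that lowercases, strips "
    "punctuation, and joins words with hyphens. Return only code.",
)

# 2. Adversarial testing: a different model is asked to *prove* the code
#    wrong with a test file, rather than being told that it is wrong.
adversarial_tests = ask(
    "claude-opus-4-20250514",
    "Write a short pytest file that tries to falsify this implementation "
    "with edge cases (unicode, empty input, repeated separators):\n\n"
    + implementation,
)

# 3. The dev side iterates against those tests; a cheap model (Haiku)
#    makes this loop practically free.
revised = ask(
    "claude-3-5-haiku-20241022",
    "Revise the implementation so these adversarial tests pass:\n\n"
    + implementation + "\n\n" + adversarial_tests,
)

# 4. Review gate before merge: the reviewer cannot be the dev model.
verdict = ask(
    "claude-opus-4-20250514",
    "You did not write this code. Reply APPROVE or REJECT with one reason:\n\n"
    + revised,
)
print(verdict)
```

In a real pipeline the reviewer's verdict would gate the merge, with a human in the loop above whatever risk threshold the team sets.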
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[
  {"id": "rdc_oi3149m", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jv6putt", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_jv5z0hd", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jv5thgd", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jv5tb43", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
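A response like this can be validated before it lands in the results table. A minimal sketch; the four dimensions come from the Coding Result table above, but the complete sets of allowed labels, beyond the values visible on this page, are assumptions:

```python
import json

# Labels observed on this page; the full codebooks are assumptions.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw model response, rejecting malformed or off-codebook records."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec[dim]!r} not in codebook")
    return records
```

A record that fails here can be sent back for a retry instead of being silently coded as "unclear".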