Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
There's no such thing as AI, it's a manufactured misnomer and if you're competen…
ytc_UgyN2wRnL…
ai NEEDS to be heavily regulated right now, it is still way too early and there …
ytc_UgwiFMeLf…
Surely the end goal must be for the world to be a eutopia? if not the thought of…
ytc_Ugx8-qczf…
Thank you for your comment. The Übermensch only appears in the allegorical and f…
rdc_cxndaeo
People gotta realize that AI is like a baby (or mirror): they act based on what …
ytc_UgwPWBa2E…
few more years and the human race is doomed, why try to make a relationship work…
ytc_UgxSZ3a3O…
While i understand and even agree to your concerns about AI and how it's used, a…
ytc_UgycVl9LK…
the problem with AI is mostly how ridiculously harmful it is to the environment.…
ytr_UgzAsQ2NS…
Comment
Marx talks about this in Capital. Machinery in the Industrial Revolution wasn't used to automate jobs. It was mainly used to lower the skill needed for the work so that women and children could do the jobs. It also resulted in a lengthening of the working day. So instead of needing to carry things all day, you now had machines that did that for you. So now you didn't have to deal with physical exhaustion, so you could work longer hours.
This has *some* similarity to what will happen with AI in software engineering, but not a ton. There's not a lot that AI can completely automate for you with coding, primarily because you just can't trust it.
LLMs are an untestable black box. Sure, the LLM can quote specific parts of a PDF or search the web. But it can misinterpret those results, randomly bork unexpectedly on certain inputs, or just hallucinate completely. We have lots of wonderful constructed benchmarks that evaluate various metrics. But these metrics are hyper-specific to those specific benchmarks, because performance varies widely depending on input, temperature, prompt, and seed. These things are *huge* and you objectively cannot reason about them the same way you can a complex software system. The testing space is infinitely larger. So smart companies will use this to make coding just a little bit faster, but won't be replacing engineers anytime soon. You need someone to blame / fire when things go wrong.
reddit
AI Jobs
Posted: 2024-04-10 (Unix timestamp 1712790751)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_kyzvk74","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_kyz7a6m","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_kyzat7q","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_kyz7vbe","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_kz0agjt","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
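A raw response like the one above is only usable once it has been parsed and validated, since a model can return malformed JSON or drop fields from individual records. A minimal sketch in Python of that step (the function and field set mirror the response format shown here; the validation logic itself is illustrative, not this tool's actual implementation):

```python
import json

# Dimensions every coded record must carry, matching the result table above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_coded_records(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Returns an empty dict if the response is not valid JSON; records
    missing any required field are skipped.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    if not isinstance(records, list):
        return {}
    return {
        r["id"]: r
        for r in records
        if isinstance(r, dict) and REQUIRED_FIELDS <= r.keys()
    }

raw_response = '''[
  {"id":"rdc_kyzvk74","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_kyz7a6m","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"}
]'''

coded = index_coded_records(raw_response)
print(coded["rdc_kyzvk74"]["emotion"])  # prints: indifference
```

Indexing by comment ID is what makes the "Look up by comment ID" view above possible: each coded record can be joined back to its source comment in O(1).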