Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytr_UgzlzQVWm…`: "Oops, you got it wrong. The correct answer is D, as no robot is actually capable…"
- `ytc_Ugz4NjlCj…`: "Thank god I'm not the only one thinking that if I don't have manners with any AI…"
- `ytc_Ugyrm7tt8…`: "What benefits do the ultra rich get if/when AI dominates all services, productio…"
- `ytc_UggIQluHM…`: "You are aware that we as a species are going to be replaced by machines right? I…"
- `ytc_UgwRGi3LY…`: "I still need to understand the logic behind wanting to become the richest and mo…"
- `ytc_UgjgSplTh…`: "I think creating such a robot is simply impossible in the first place. Because …"
- `ytc_UgzCs3vmG…`: "it's one of the best description videos about this AI issue. I'm a motion graphi…"
- `ytc_Ugx7sJArc…`: "That’s the stupidest argument ever. All the people who made those arguments are …"
Comment
It's highly unlikely generative AI could ever replace an industry requiring so much higher-order logical evaluation and iterative problem solving, on both fundamental and systemic levels. It's important to remember that the computers aren't thinking, even though we, the consumers and generative AI marketing teams, like to convince ourselves otherwise. While generative AI models can create functional code to solve unique problems, it's also important to remember that the things generated by these models are based on probabilities derived from existing training data, irrespective of whether or not a valid solution truly exists for a given input or whether a solution is the best, most accurate, or even correct. We can add tools to generative models to correct syntax, style, and everything a compiler might check to reach runnable code, but the computer still isn't **thinking** about the solution it's generating: it isn't evaluating the logic, unrolling the loops, considering efficiency, evaluating the security of the code, adding verbose error handling, handling inputs with any amount of robustness, or writing readable code that can be meaningfully documented, and these are generally issues that cannot be solved with LLMs. The computer isn't "writing" code with any meaningful "intent" or "thinking" about how to solve a problem, and these are things that won't improve over time; they border on fundamental and practical impossibility.
Another thing to consider: the granularity and control of the code produced by generative AI can only be as specific and detailed as the prompts provided to it. We are currently coping by saying "It's not perfect, and what it generates is usually pretty basic, but it will get better over time", but all we are really doing is confusing ourselves into believing this will be the next low or no-code language (which is the best-case scenario, with the worst-case logical translation process). Sure, having it generate boilerplate typescript for a mult
reddit · AI Jobs · 2024-03-08 (1709862609) · ♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_ktupso1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ktvdfli", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ktt3h9r", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_ktsitld", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ktus32i", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
```
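A raw coding response in this shape can be indexed by comment ID to recover the per-comment dimensions shown in the "Coding Result" table. A minimal sketch in Python (the field names follow the JSON above; the two reproduced entries and everything else here are illustrative, not part of the coding pipeline itself):

```python
import json

# Raw coding response in the format shown above (two entries reproduced for brevity).
raw = """[
  {"id": "rdc_ktupso1", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "rdc_ktus32i", "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "approval"}
]"""

# Index the codings by comment ID so any single comment's dimensions
# can be looked up directly.
codings = {entry["id"]: entry for entry in json.loads(raw)}

print(codings["rdc_ktus32i"]["reasoning"])  # → deontological
print(codings["rdc_ktus32i"]["emotion"])    # → approval
```

Looking up `rdc_ktus32i` reproduces the table values above (reasoning: deontological, emotion: approval).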