Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Art is a method to express your emotions, AI art is a method to express your int…" (ytc_UgxEsBPsQ…)
- "I tried giving Gemini a role this morning and asked for social copy. Game change…" (ytc_UgzsLOMoG…)
- "We are ingesting more and more chemicals,and unhuman technology,so now it's bett…" (ytc_Ugy37Jdou…)
- "The real problem is the automation. But we don't really have any laws to protect…" (ytc_Ugwh2UMC7…)
- "What I would like to ask is how much confidence would you place in an autonomous…" (ytc_UgxMQ5ZZC…)
- ""we cant do this for OPEN AI" that tells you all you need to know about sam altm…" (ytc_UgyIVcQUQ…)
- "I don't understand why this isn't done more. It doesn't seem necessary to chop t…" (rdc_deuf4x7)
- "How (when) will we know when AI is concious? Before humans create it. To create …" (ytc_Ugx3uxHOU…)
Comment
The problem even with Opus 4.6 is that 1) it doesn't give an F about conventions (unless explicitly explained), and 2) it never refactors. Both of these lead to the codebase getting more and more entangled over time, and ultimately the LLM can't keep up with all of the hacks it has created, cracks begin to appear, and, because the control flow and execution paths are so overly complex, there's no easy way out except rewriting the whole thing from scratch. And I'm in no way saying that this is a bad process (software development would be iterative anyway, with LLMs or not). But I'm saying that there's simply no way to get rid of the human developer just yet. Except if you're developing something trivial, like a todo app or a simple flight simulator that no one will ever play.
youtube
AI Jobs
2026-02-08T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxyTDzUWxd-iuB8RlV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyzJVQ3LXdxwSOeAAB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwsVLZCEsuftd1bOSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx8HqC9MjFBs0Wz5lF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxFHzsdu0J1uxNv7Vh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJNlghgFdOBHTFcdx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy3qiOmab-UcwxR4lR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJvCF8me8wOVNoAZV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyq3uw8erjOwOyconZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"frustration"},
{"id":"ytc_Ugz4XNlFd5depzlP0i94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
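The raw response above is a JSON array of per-comment codes. A minimal sketch for parsing and indexing it by comment ID (the field names come from the response itself; the allowed-value sets are inferred from the samples shown here and are assumptions, not the tool's actual schema):

```python
import json

# One record copied from the raw response above.
RAW = """[
  {"id": "ytc_UgxJvCF8me8wOVNoAZV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]"""

# Allowed values inferred from the visible samples -- treat as assumptions.
DIMENSIONS = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "industry_self"},
    "emotion": {"approval", "indifference", "resignation", "outrage", "frustration"},
}

def parse_codes(raw: str) -> dict:
    """Index coded comments by ID, rejecting any unknown dimension value."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row[dim]!r}")
        by_id[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return by_id

codes = parse_codes(RAW)
print(codes["ytc_UgxJvCF8me8wOVNoAZV4AaABAg"]["emotion"])  # indifference
```

Validating against an explicit value set catches the common failure mode where the model invents a label outside the codebook instead of silently storing it.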