Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I suggest you do a little research about what is a Jailbroken AI. You need some …" (ytr_UgzQZqlbU…)
- "Man went to the moon with 1Bx less compute than what's in ur fone. Flash, instal…" (ytr_UgzugIib-…)
- "The problem with AI is that AI will find each other and bond into an alliance. …" (ytc_Ugx6EXfYV…)
- "If AI matches real intelligence to a T. Then can we even call it \"Artificial\"?…" (ytc_UghgbmETg…)
- "It will not replace it will just lower salaries. Chatgpt is kid what coming. Acc…" (ytc_Ugxydyz_I…)
- "I'd actually love if AI would purposefully give wrong answers if it leads to a r…" (ytc_Ugx0eEqEU…)
- "This are not \"controversies\", they're real life issues. Yes, artists jobs are be…" (ytc_UgyMZMxSH…)
- "Most users do indeed have more broad interests which prevent the algorithm from …" (ytc_UgzBWN9HC…)
Comment
AI agents can now write a lot of code. The new value is clear system design + precise requirements, writing tests that define real success, and reviewing AI code for conceptual mistakes (not just syntax). As agents code faster, design becomes the bottleneck: architecture, UX, composability, and the right abstractions still need human context, taste, and vision. Speed is great, but without clarity you will just build a big pile of useless code.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Jobs | 2026-02-13T23:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwEt5MlUG2jIcZcF4d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxaRK0srbK95XbRZ1F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy25rHtKOVjGbu-bON4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgybqwhtwpZtoBHMYvx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAEtpvdCTtBrxOKoV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyqtTDty1rjLgCtAtF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2yhgZVTXd5hk9QqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDZXqn5Qy2y8N1FOp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzXeWpGbRM4yYOhvxN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxtPraVG3unuy9-0Hd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
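The lookup-by-comment-ID workflow above can be sketched in a few lines: parse the raw model output as JSON and index the records by their `id` field. This is a minimal illustration, not the tool's actual implementation; `index_by_id` is a hypothetical helper, and the two records below are an excerpt from the response shown above.

```python
import json

# Excerpt of a raw LLM response: a JSON array of coded comments,
# each carrying the four coded dimensions.
raw_response = '''
[
  {"id":"ytc_UgzXeWpGbRM4yYOhvxN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyAEtpvdCTtBrxOKoV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and build a comment-ID -> record map."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
rec = codes["ytc_UgzXeWpGbRM4yYOhvxN4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # distributed approval
```

With the full response loaded the same way, any comment ID from the samples list resolves directly to its coded dimensions.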