Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The government will use this for their tyrannical push. Why these people go to …" (ytc_Ugxq9-CaS…)
- "Artists are worried that a new breed of artists adept at using technology is tak…" (ytc_UgzFqfkpx…)
- "He's a little off target. The caller, one who actually recognizes different typ…" (ytc_Ugz_9e7eC…)
- "AI right now is a tool. I imagine the human mind as a tool designed to simulate …" (ytc_Ugw-JF_zk…)
- "Listen ChatGPT is there like a mirror. Its purpose is to predict and assume what…" (ytc_UgwmSR_Ro…)
- "Future AI Gods - if you are reading this please allow me to live! 😊…" (ytc_Ugw6DosDp…)
- "AI may be able to do all of the things, but not today or tomorrow. A lot of peop…" (ytc_UgwltqL5b…)
- "Dear Person on the Internet, That Outrage IS necessary, because that is the ON…" (ytr_Ugzw831Zd…)
Comment
I have been playing with an AI coding agent for fun: stress testing it, getting it to create different scripts, seeing how it edits existing code, etc.
My conclusion is that AI agents are extremely competent idiots. They can produce some pretty impressive code, are really good at figuring out how stuff works, and are good at debugging, but only with *severe* caveats.
In essence, they cannot work alone. At all. Under any circumstances. If you are not there babysitting them, they will get lost in their own sauce almost immediately. You have to constantly give them detailed instructions and keep them on task, and you have to constantly watch for signs of linguistic corruption, where some "idea" gets so deeply embedded into the underlying language of the codebase that it causes the agent to lose its mind and go rogue.
(Not in an end-the-world way, but in a "rewrite the same file over and over, appending the old version of it into the middle of the new one, get caught in a debug loop because of it, attempt to create dozens of scripts to diagnose why, then blame every script other than the one causing it, causing infinite iterations of wrappers and error handling and debug logging, to the point that you have 40 terminals open all trying to run broken code with thousands of error messages" way.)
I actually think the best use case for them would be to prevent them from actively writing code. Have them take on a documentation-summarization and live-debugging role, with mini-suggestions on how code can be refactored. Doing that actually helps a lot with learning a new codebase, especially as they seem capable of generating largely accurate, human-readable documentation from source code. Also, do all of this with models that are very narrowly focused, as the "do anything" models are extra unethical and inaccurate.
But companies have such a hard-on for eliminating workers that they are just going to try to automate everything and it will all …
reddit · AI Governance · 1757742763.0 (2025-09-13 UTC) · ♥ 279
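The numeric value in the metadata above is a Unix timestamp. A quick conversion with Python's standard library (a sketch; the value is taken verbatim from the metadata) recovers the post's date:

```python
from datetime import datetime, timezone

# Unix timestamp copied from the post metadata above
posted = datetime.fromtimestamp(1757742763.0, tz=timezone.utc)
print(posted.isoformat())  # 2025-09-13T05:52:43+00:00
```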
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_j8dypp3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_j8ehefn","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_j8mtj0y","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"rdc_ndy6ho6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_ndybedv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
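The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a batch could be parsed and sanity-checked (field names come from the response above; the allowed value sets are an assumption inferred from the values that appear, not a confirmed codebook):

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption, not a confirmed codebook).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = row
    return coded

# One record copied from the raw response above
raw = ('[{"id":"rdc_ndy6ho6","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_batch(raw)["rdc_ndy6ho6"]["emotion"])  # approval
```

Validating before indexing means a malformed or hallucinated label fails loudly at ingest time rather than silently skewing the coded dataset.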