Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_Ugw4jM93_… : "I disagree. We should not accept any advance in technology like this without que…"
- ytc_Ugx8q6w5T… : "This just shows they’d do anything to pay more money, I can see why anime charac…"
- ytr_Ugz2V7PC8… : "Yes. I'm already guilty of using AI to help me think. I'm already more dependent…"
- ytc_Ugx3bRp0s… : "AI prompt Artist, AI Artist is an oxymoron got to add prompt in there. Now if yo…"
- ytc_Ugx2VwyBw… : "Now you know why the vaccines were rolled out. Let’s also talk about how all AI …"
- ytc_Ugy7iK-hO… : "@TheDirayOfACEOClips People won't have AI robots to send to work, Ai won't have …"
- ytc_UgyHZ_8Yi… : "Honestly when AI first came out, this is kinda how i thought it would be used. J…"
- ytc_Ugx0DIiuM… : "Are there any infos on the impact of AI Art on Artists? (Studies etc) From my li…"
Comment
As a developer who tested Claude/Cline extensively over more than a year, I can say with confidence:
Once your project grows beyond ~5,000 lines of code and spans multiple files, Claude loses structural coherence. It starts forgetting context, mismanaging dependencies, repeating errors, and hallucinating logic. Debugging becomes circular. Refactoring? Unreliable.
And this isn’t just Claude. No LLM today – including GPT-4, Gemini, Devin – can consistently maintain architectural integrity across a large codebase. There's no persistent memory, no full graph awareness, and no real understanding of software design patterns at scale.
The claim that “AI will write 100% of code within a year” is not just wrong – it's deeply misleading. It ignores fundamental limitations of current LLMs:
- No true multi-file reasoning
- Shallow state tracking
- No long-term refactoring memory
- No awareness of tech debt or codebase health
We’re years away from reliable full-scale AI coding – optimistically 3–5 years, realistically longer.
Source: youtube · 2025-07-09T11:2… · ♥ 17
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyIWOJqo-xnBt04kxx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzbpQL9HGhdw6u2oF54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2zy5Mc3vZAObXsmx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLXXc8FXlL8s9LCwZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyz2UwE6cKgAS_OM_V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy1fG7PcB4510md08h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwkiB5fMten5cl5RIR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgylUOY-HSFMW7FdwYN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwm6Z1noEdiK-ZvU0R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxJl9mxmmv8RiOECA54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
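The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing and validating such output might look like the following; the allowed value sets are assumptions inferred only from the codes visible above, not an exhaustive codebook, and `parse_codes` is a hypothetical helper name.

```python
import json

# Allowed values per dimension — assumed from the codes seen in this
# dump, not the full codebook used by the coding pipeline.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "fear", "indifference", "approval", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # IDs in this dump are prefixed ytc_ (comments) or ytr_ (replies).
        rec_id = rec.get("id", "")
        if not (rec_id.startswith("ytc_") or rec_id.startswith("ytr_")):
            continue
        # Every dimension must be present and take a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records rather than raising keeps a batch of ten codes usable even when the model emits one bad object.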