Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "I understand that you mean we as a species are creating AI when you say WE are m…" — ytc_Ugw3J16bI…
- "There's a certain side to this I never hear anyone talk about: The people with t…" — ytc_UgxvoFhtC…
- "Shall we ask Chatgpt what the highest risks are and how we should control them…" — ytc_UgwzUAUhV…
- "Israel’s Ministry of Diaspora Affairs created 600 AI generated fake accounts/pro…" — ytc_UgwHKyDju…
- "This is a good interview. I have worked in science and built ML models. I think …" — ytc_UgxR9QDJC…
- "There is something about AI that has reveal the folly of bureaucracy ( basically…" — ytc_UgzFF8j86…
- "Yaknow, I don't like hearing about the disabled artists, argument. Like there's …" — ytc_UgymvTh8w…
- "Any follow up on this since Disney has come out and partnered with AI companies …" — ytc_UgxWtqFX_…
Comment
No, don’t just learn to code—learn to program. There’s a big difference between understanding software architecture, programming that architecture, and simply using a package someone else built for you, or dragging in a front-end UX library that happens to render charts decently.
What about the low-level stuff? AI is still garbage at that. Pointers, borrow checkers (like in Rust), deciding when to hash or not—it doesn’t understand any of this. It only knows what we give it, and only within a limited context.
And if my codebase is over 100k lines? It can’t do much with that as a whole. It's not magic. Everything it outputs still needs to be sanity-checked and tested.
Non-developers don’t get that. We do. AI is just a tool—a helpful one for parsing large datasets, looking up documentation, or generating boilerplate code I don’t feel like typing out manually. Sure, it’s fun to use, but it still requires a real developer to guide it, interpret its output, and eliminate its hallucinations.
Source: youtube · Video: AI Jobs · Posted: 2025-03-25T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgzAD1K1qwH5iegqTsl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxg073mNimewjxdwdd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxH5EW_9muwjfkoxp54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLlX84yCPlRLu7wiZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwD6yPq77WTVk2gj2B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcRxLPnzwptH2eOpB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0MI4eQWNbtamU-Th4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwDNbqjYfEubzUZ4WZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWADpALdls0xR1u7F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzb6LF89xtOR2R6-7J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
```
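Each raw response is a JSON array of per-comment codes, one object per comment ID, covering the four dimensions shown in the coding-result table. A minimal sketch of how such a batch might be parsed and validated before loading it into the results view (the field names come from the response above; the allowed value sets and the `parse_batch` helper are assumptions, not the pipeline's actual codebook):

```python
import json

# Allowed values per dimension — assumed from the samples above;
# the real codebook may define additional codes.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "mixed"},
    "reasoning": {"none", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulation", "mixed"},
    "emotion": {"none", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response into validated code records,
    dropping malformed IDs and out-of-vocabulary values."""
    valid = []
    for rec in json.loads(raw):
        if not rec.get("id", "").startswith("ytc_"):
            continue  # skip records without a YouTube comment ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzAD1K1qwH5iegqTsl4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
print(parse_batch(raw)[0]["emotion"])  # indifference
```

Validating against a fixed vocabulary like this catches the most common failure mode of batch coding with an LLM: a response that is syntactically valid JSON but contains a hallucinated label or a dropped field.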