Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- 1:00:30 "There's this weird thing... when I talk about A.I. stuff, people say m… (ytc_Ugw0h810W…)
- @cedricdelsol9320AI lets me ask stupid questions at 3 AM. But yeah, you need to… (ytr_UgwfnOOG0…)
- Also another thing with your comment "without a constant stream of human genera… (ytc_Ugy6kh_y2…)
- Really hate how my academic paper on this subject from 2023 is now a major topic… (ytc_UgxGOq-cC…)
- Same… and from someone using AI to do the things I did by hand (websites, apps, … (ytr_UgxToSmdU…)
- No you replace a them with AI they will treat the workers like worthless drones … (ytc_UgyghbmPL…)
- 10:42 ...you can sew for that right... even if the law hasn't caught up to the w… (ytc_UgwR1BhzH…)
- When will our governments wake up and regulate AI and Robotics? It’s already too… (ytc_UgxiY5LiJ…)
Comment
I think that there's a bad toupee problem, and most people aren't noticing the sites that are mostly made by AI because they don't announce themselves as such. I do have to make SOME high level decisions, but if I tell Claude to follow a best practices document and plan out work in structured documents ahead of actually executing it I get much better results than when I was just prompting. It does keep breaking things, but the pace at which it goes ten steps forward and one step back is worth it IMO. Some of the structured plans it's making are over 2k lines (broken up into smaller files so as to not overwhelm the contexts of agents working on particular tasks). I'm having it work on a project with over 3k source code files right now. I feel like there is a bit of high level understanding that is helpful, but the main barriers of entry right now are being able to navigate a terminal and understanding what kind of context to give it... and it can write a lot of that context itself. My current best practices document was written by Claude and the only real input I gave it is that it should never use git (it previously made modules that resulted in me not commiting all important changes).
youtube
AI Jobs
2025-12-16T02:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
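Each coding dimension takes values from a small closed set. As a hedged sketch, a coded record can be checked against those sets before display; the allowed-value lists below are inferred from the values observed in this export, not from a documented schema.

```python
# Sketch of validating one coded record. The ALLOWED sets are assumptions
# inferred from the responses in this export, not a documented schema.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "approval", "resignation", "outrage"},
}

def validate(record: dict) -> list:
    """Return (dimension, bad_value) pairs; an empty list means valid."""
    return [
        (dim, record.get(dim))
        for dim, allowed in ALLOWED.items()
        if record.get(dim) not in allowed
    ]

# The record shown in the table above passes; an unknown value is flagged.
record = {"responsibility": "user", "reasoning": "consequentialist",
          "policy": "none", "emotion": "approval"}
print(validate(record))  # []
```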
Raw LLM Response
```json
[
{"id":"ytc_Ugw3Z5Kc-k_VwdeXtbd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwY2euVpQRUl_MDnmp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzwu7hg2fw2cLo9LqF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwugnyApxPmAivZ61l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwY01MT7UVAGHuSCQd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLr2LZzXQqsmli1oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyZrX2pEroJY-RJeVF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5My7YchftwOrhYkR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwv_RZhWtZ0yxdnvLV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyp8fmbF7rOfw-wRbZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
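Looking up a coding by comment ID, as the interface above does, amounts to indexing the raw response. A minimal sketch, assuming the raw LLM response is a JSON array of objects each carrying an `id` plus the four coding dimensions (as in the response shown above); the function name is illustrative.

```python
import json

# Raw response excerpt copied from the export above; a real lookup would
# parse the full array.
raw_response = '''[
  {"id": "ytc_Ugzwu7hg2fw2cLo9LqF4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

def build_lookup(raw: str) -> dict:
    """Index coded comments by their ID for inspection by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = build_lookup(raw_response)
print(codings["ytc_Ugzwu7hg2fw2cLo9LqF4AaABAg"]["emotion"])  # approval
```

Indexing once into a dict keeps repeated lookups O(1), which matters when the same export is inspected for many comment IDs.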