Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Because mediocre people keep pushing it to avoid coming to terms with the fact t…" (ytr_UgxmrV94e…)
- "I really like the point you made about all artists being subtly influenced by th…" (ytc_Ugw5bypdr…)
- "Giving a hat to a robot from 6 feet away making it look so ez…" (ytc_UgyJLlc8C…)
- "According to a bunch of examples what Chinese government took advantage of cutti…" (ytc_UgxqAfh2r…)
- "Once A.I. is within a Drone with a solar power docking station to Land on, Sto…" (ytc_UgxZ7bcXM…)
- "@ferongr Never said it was, but people who do are getting a service for free. An…" (ytr_UgzAPpAju…)
- "@Alienmikuloveu without money it wouldn't cost money to keep the machines runnin…" (ytr_UgwPM_zel…)
- "Artist: Spends hours making a detailed piece then posts it / AI: Spends 2 seconds …" (ytc_UgwT6L-3M…)
Comment
I’ve found the best AIs (Sonnet 4.5 etc.) are actually useful and don’t just churn out stuff that isn’t thought through and will cause many issues later. However, this is only because I give them a very strict set of requests that I only know from being a dev for 25+ years. I make sure the AI only handles a small function at a time and that the function is testable. If you give it too much freedom, it makes mistakes. The problem is that the mistakes don’t look like mistakes: the syntax is correct and it mostly runs. The mistakes are things that it doesn’t know even from a massive context, e.g. this system will fail if x, y, or z happens. But as it passes the current tests, the AI confidently reports “finished”.
I still think of it as a very eager young geek: it knows its stuff but is miles too confident and makes mistakes because it is rushing, just like an overly confident junior. Some day this may change, but I don’t think the current LLM approach will solve this fully unless it is specifically tuned to “think” properly.
youtube
AI Jobs
2025-12-29T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyShVCiFohFbAuxHbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGKKhCNWKpWmE0QwJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxWlY8wKwlC3IVD67p4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzjN5oB7JY8AksUHH94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy5UMPycm4SjvWi_mZ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5HiA2Fy0j6lNiMAd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGUE3QM4188oI13jB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz417YVrvH0_gMyDE94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz3dMe7Z1hyMM1JIEx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxpF70ghTL3Lsr47EV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
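Since the raw response is a plain JSON array keyed by comment ID, looking up a single comment's coding is just a matter of parsing and indexing. A minimal Python sketch, using two rows copied from the response above (the field names match the Coding Result table; this is an illustration, not part of the tool):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgyShVCiFohFbAuxHbF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzjN5oB7JY8AksUHH94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_UgzjN5oB7JY8AksUHH94AaABAg"]["policy"])  # regulate
```

The same dictionary also makes it easy to tally values per dimension (e.g. `collections.Counter(c["emotion"] for c in codings.values())`) when summarizing a batch.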