Raw LLM Responses

Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below. A minimal lookup sketch follows the sample list.
- "I agree, but I don’t think you’ll ever get a high paying job based on how well y…" (ytr_UgyHPliiA…)
- "I've never read so much in my life since AI. I've also learned three programming…" (ytc_Ugwzjgd02…)
- "I think AI has a tremendous amount of potential. Many things rely upon computers…" (ytc_UgxR64j5W…)
- "Lawyers EXPLAIN it. You get one malpractice case go against 1 radiologist the In…" (ytc_Ugz-KS1nl…)
- "If they made that robot like they make the rest of their stuff, they won't be wo…" (ytc_UgwFibX1T…)
- "no, ai is not hiding its full power... this is just more hype building to prop u…" (ytc_UgwgwvknF…)
- "Loved this! 😂 But it also made me realize how easily brands can miss out if they…" (ytc_Ugws3R4zB…)
- "There are positives to AI when held in the right hands of those whose core inte…" (ytc_UgzyZThJw…)
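Given a stored batch of raw responses, a lookup by comment ID reduces to a linear scan over the parsed records. A minimal sketch, assuming the responses are saved as a JSON array in a file named raw_llm_responses.json (the file name and layout are assumptions, not the tool's actual storage):

```python
import json

def lookup_comment(comment_id: str, path: str = "raw_llm_responses.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a JSON array shaped like the raw response below
    return next((r for r in records if r.get("id") == comment_id), None)

# For example, the comment inspected below:
print(lookup_comment("ytc_UgxJ3ikTo36tdVl15Sx4AaABAg"))
```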
Comment
80% accuracy isn't good enough. If the LLM SWE is set free on a large codebase, that 80% accuracy rate wrecks everything.
Not to mention, LLM-generated code is often just badly written and implemented. It will patch issues versus solving the issues, most times.
Yes, it is possible to increase speed of output with good planning and design. But for it to work, the model must be babysat.
Also, models often completely ignore instructions, so automation at any kind of scale is extremely risky.
LLMs are cool and have their niche uses, but as a replacer of competent humans...no way.
Source: youtube · AI Jobs · 2026-02-26T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
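The coded dimensions follow a fixed categorical scheme. One way to make that scheme explicit downstream is a typed record; the sketch below is built only from the category values visible on this page, so the real codebook may well define more:

```python
from typing import Literal, TypedDict

# Category values are only those observed on this page; the full
# codebook may define additional ones.
Responsibility = Literal["developer", "company", "ai_itself", "none"]
Reasoning = Literal["deontological", "consequentialist", "mixed", "unclear"]
Policy = Literal["liability", "none"]
Emotion = Literal["outrage", "fear", "approval", "indifference", "mixed"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```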
Raw LLM Response
```json
[
  {"id":"ytc_Ugx09uuVgn2i12SbWI14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw0i1-J_M992mamd2x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxAE4a1r2vSCMTW8YB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwxsKasPmsaodW8Cf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGOVqQ5fiaHosfn494AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw_bs2RBnYwu4RvtOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxhhqMR-qsCp5u8DNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJ3ikTo36tdVl15Sx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzQGvrMNYckuSTq_Id4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzmVAkRR_b9w9oStyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
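Raw output is stored verbatim, and models do not always return clean JSON, so parsing has to tolerate stray text around the array. A minimal defensive sketch; the fallback behaviour here is an assumption, not how this pipeline actually handles malformed output:

```python
import json

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response into a list of coded records.

    Models occasionally wrap the JSON in prose or emit trailing text,
    so fall back to the outermost [...] span before giving up.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        start, end = raw.find("["), raw.rfind("]")
        if start != -1 and end > start:
            return json.loads(raw[start:end + 1])
        raise
```

The record whose id matches the inspected comment (here ytc_UgxJ3ikTo36tdVl15Sx4AaABAg) supplies the values shown in the Coding Result table above.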