Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The important thing to understand is that AI doesn't (yet?) have good taste for code. As a result, it learns any code it finds on the internet as if it were written by Donald Knuth himself. Every security vulnerability in example code, programming-course homework, or distributed PoC code will be trained on as an example of "industry standard" code.
I think modern AIs are already good enough that if AI vendors simply spent a LOT of computing time weeding through all their training material to annotate potential issues, and then re-trained the AI from scratch on the annotated material, the resulting AI would work better. It still wouldn't be perfect (because the annotation would have been done by a non-perfect AI without human supervision), but it would be much better than the AI we currently have.
For example, if you give 200 lines of code to the ChatGPT 5.4 Thinking model with the prompt "Can you find real or potential security issues in this code?", it can typically answer correctly after thinking about it for about 5 minutes. However, even AI companies are not rich enough to burn a similar amount of thinking on every 200-line code fragment in their training data!
youtube
2026-03-18T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzbzK1DfXhQUf8-cdZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzo8DfMGMYYW5VuhoB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyOFpPiMwrf_0S1GUV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx9G6cXRXMF80OXK654AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxV6X73Y87oFBV4kr14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgymsN2JMxawandVGqR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwkKpWb70Re1cZpkM94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzv4WtkaaKpUS2wJDF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxK2q4-Sm4ki2wN2T94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyYRd7yOPVmmEvkYV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
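The raw response above is a JSON array of per-comment labels across the four coding dimensions. A minimal sketch of how such output could be parsed and sanity-checked is below; the allowed label sets are inferred from the values visible on this page, not from any official codebook, and the `validate` helper is hypothetical.

```python
import json

# Allowed values per dimension, inferred from labels observed in this batch
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def validate(records):
    """Return (comment id, dimension, bad value) triples for any off-schema label."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append((rec.get("id"), dim, value))
    return problems

# Example with one well-formed record and one off-schema label.
raw = ('[{"id":"ytc_a","responsibility":"developer","reasoning":"consequentialist",'
      '"policy":"regulate","emotion":"fear"},'
      '{"id":"ytc_b","responsibility":"robot","reasoning":"mixed",'
      '"policy":"none","emotion":"approval"}]')
print(validate(json.loads(raw)))  # → [('ytc_b', 'responsibility', 'robot')]
```

Checking labels against a closed vocabulary like this catches the common failure mode where the model invents a category name that the downstream tally silently drops.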