Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples (click to inspect):

- "I have people around me on office phones and walkie talkies talking about things…" (ytc_UgzrfO_Eg…)
- "Unfortunately I just used AI to generate the code of my ML project. It wrote a l…" (ytc_UgwXCmyCv…)
- "Using ai is the same as using a camera. Do you blame the camera for creating ima…" (ytc_Ugxwv6Clx…)
- "I am polite to AI because I’m already used to typing that way. I don’t want to g…" (ytc_Ugz3tn5Dx…)
- "I feel like this post is just anti-AI propaganda. The establishment is running s…" (rdc_my6lmmz)
- "jokes aside she actually makes a pretty good point / robots have been taking over …" (ytc_Ugwpz5Iix…)
- "It’s BS. I just dumped my ChatGPT subscription because it isn’t worth $240/year.…" (ytr_UgwJ-QRI1…)
- "its impressive at a technical level but that shit looks very aimless. its not lo…" (ytc_UgylUg9dU…)
Comment

> This is a based and true take. LLMs are a probability machine trained on some good code and bad code, it doesnt distinguish between them. All it looks at is the probability of the next token based on the previous tokens. It doesn't understand logic. Relying solely on an llm to write performant, up to date and secure code is a bad idea. Week 1 of any stats course "we deal in probability, not absolutes."

Source: youtube · 2025-08-19T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyO2jj-psCEZwO7ZyR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxNuoqP9wnVDWLpbYJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz9fwEas3Ih8kdWKQh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxIC-2GUT48rlUQDnp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzkpHfB7JAIbQQ7E1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwxvGtCD6VKwGekf_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxBHYCqi8RewwnX4-V4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyMuffj2D3KLXcw-kN4AaABAg","responsibility":"stakeholders","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzrFTDrj6HjocAMDQp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzcxr2o7TvbaMv0ksB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
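A raw response in this shape can be parsed and indexed by comment ID before it is shown in the detail view. Below is a minimal validation sketch; the allowed label sets are inferred only from the values visible in this sample batch (the full codebook may define more categories), and the function name `parse_batch` is illustrative, not part of any tool shown here.

```python
import json

# Label vocabularies per dimension, inferred from this sample batch only.
# ASSUMPTION: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "developer",
                       "ai_itself", "stakeholders"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "liability", "none", "regulate"},
    "emotion": {"indifference", "fear", "resignation", "outrage",
                "mixed", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response and index its records by comment ID.

    Raises ValueError on a missing key or an out-of-vocabulary label,
    so a malformed model response fails loudly instead of being stored
    as a silently bad coding.
    """
    by_id = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        by_id[cid] = rec
    return by_id

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgyO2jj-psCEZwO7ZyR4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
coded = parse_batch(raw)
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each lookup is a dictionary access rather than a scan over the batch.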