Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The further you go, the less you know." When contemplating the implications of … (ytc_Ugz9cwoNS…)
- @Noname-rw2gi I'm not skeptic, OpenAI in the form of Sam Altman was saying thing… (ytr_UgzVkwKFd…)
- yall need to leave my ai alone 😂 Im just tryna make music man. i write the songs… (ytc_UgybuTd9I…)
- So a guy from Australia found a (previously nonexistent) cure for his ailing dog… (ytc_UgwDBzgIG…)
- AI "art" is the same thing as using google images, but google images take a bunc… (ytc_UgzT1ubUy…)
- Ai will never replace us, because we are story makers, even if AI would be super… (ytc_UgygUL-9J…)
- AI is an employee that constantly has to be corrected and retrained. It makes st… (ytc_UgzpFFw_X…)
- AI boosts productivity, leading to mass layoffs, no income for people (homeowner… (ytc_Ugx-_8bjP…)
Comment
From my own personal experience, as an student, even with the most precise requirements the AI can do the job correctly. The thing ends up fcking the code and doing things wrong for no reason and then you go and check the code and try to explain "hey this is wrong, you made this A thing but you should've done this B thing", still, the AI continues to halucinate.
I personally don't like these "AI" because they are nothing more than an overhyped bot. It's just a probabilistic model, it's response is based on the "chance of being correct" and it is not really useful in the real world. All I know is that, with AI, the companies now have humans working more on rewriting code that is wrong than producing code.
Source: youtube · Topic: AI Jobs · Posted: 2026-03-20T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwaolCSCrbAmzmXPy14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzE5vuj-DLOECBsDzV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_OZ-y_IxebN0dWOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3cdAjIfc7VJckYAJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwX7qjzo1jevbrvHc14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1DARepnEMK3u2lPB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgytrV3NOnPLyOb7RQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzgFuK7fSuKvYnKJAh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRrFppSFBbTWXiKO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxlVaW17VPUUpRsX4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
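A raw response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal example, not the tool's actual pipeline: the allowed values per dimension are inferred only from the samples visible on this page (the real codebook may define more categories), and `ytc_example123` is a hypothetical comment ID.

```python
import json

# Allowed values per dimension, inferred from the samples on this page.
# The full codebook likely contains additional categories.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse one raw LLM response and index valid codes by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        # Collect dimensions whose value falls outside the known codebook.
        bad = [k for k, allowed in SCHEMA.items() if row.get(k) not in allowed]
        # Keep only rows with a comment/reply ID prefix and no invalid codes.
        if cid.startswith(("ytc_", "ytr_")) and not bad:
            coded[cid] = {k: row[k] for k in SCHEMA}
    return coded

# Hypothetical single-row batch in the same shape as the raw response above.
raw = ('[{"id":"ytc_example123","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
codes = validate_batch(raw)
```

Indexing by ID this way is what makes the "inspect any coded comment" lookup cheap: a coded batch becomes a plain dict keyed by the same `ytc_`/`ytr_` identifiers shown next to each sample.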