Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "AI, Autonomous Insanity. The Trump regime runs on high octane stupidity with a t…" (ytc_UgwSrE7LG…)
- "AI can never replicate Human skills and creativity completely, AI can only compl…" (ytc_UgxI5Jnld…)
- "Actually I personally think that making a nice plot with the ai is good / And then…" (ytc_Ugx_0ejMz…)
- "Hulk hogan predicted this in a 1998 video that is very hard to find. He broke do…" (ytc_UgxTV3Tft…)
- "@laurentiuvladutmanea so...if artists don't care about the money at all, why are…" (ytr_UgzVed8Hm…)
- "Police: we made a mistake. The facial recognition was wrong and misidentified…" (ytc_UgyKymhHg…)
- "I asked Chat GPT about this, Heres the response. / Yeah, AI bros who steal art to…" (ytc_Ugx1b5TuD…)
- "@Manget225 I think so too!! Me and my friends had huge arguments about the situa…" (ytr_UgxhmNalX…)
Comment
I love how these comments are either extremely in favour or extremely against AI in coding. Hilarious.
The reality is that it performs great on many of the programming tasks you give it. The Claude + Cursor combo does wonders. But it's definitely not perfect, and in the hands of junior devs I think it's a bit dangerous. Models tend to hallucinate and sometimes loop infinitely on a question. It also changes unnecessary pieces of code. Sometimes it doesn't clearly understand a prompt, or only partly does the job and you need to point out what it did wrong, despite there being a clear input and output example. It remains token prediction, so unfortunately there will always be an element of randomness.
But in all fairness, it does great in a large majority of cases so far with minimal reprompting. The quality of the prompting, and properly contextualizing what you want, plays a huge role as well.
youtube
2025-08-07T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwOaYclmw6hzZ9aic54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwImB_UQIWaT7aUieN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz5XKlvRBxYvjBbOKl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyf83Zqk0RZXi6jazR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzbDaqE3pquBYHMa7B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxP7z-Nvn2yWQJ0lGN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxOkCaNsg8y2fhyyx94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwIuOVCDESs38qQz2R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxQmh8pi6kudxygF-Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxdw08m8QRVwcMB8dB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
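The raw response above is a JSON array of per-comment codes along the four dimensions in the result table. A minimal sketch of how such a response could be parsed and validated before storing the codes (the allowed value sets below are inferred only from the values visible on this page, not from the full codebook, so they are an assumption):

```python
import json

# Allowed values per dimension, inferred from the sample response shown
# above (assumption: the real codebook may define more values).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting unknown values."""
    codes = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} value {item.get(dim)!r}")
        codes[cid] = {dim: item[dim] for dim in ALLOWED}
    return codes
```

Validating eagerly like this catches the hallucinated or malformed labels that models occasionally emit, before they silently enter the coded dataset.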