Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@queeblo545 Not when you account for model cannibalization. The more AI feeds on…
ytr_UgxxoP5n0…
didn't this happen in the 60s with machines, people were like "with robots doing…
rdc_cz32dni
I think if robots gained AI they would be stumped by the same things that stump …
ytc_Ugyuo4hN9…
Get some fucking skill AI artists.
I don’t have any artistic skill but you don’t…
ytc_Ugxr2IEH9…
A interesting point about this this is how we feel different about different typ…
ytc_UgyrLcbTU…
The truth of what can actually happen is even more terrifying than this episode …
ytc_UgzJOvyyl…
We appreciate your interest in Sophia! While she may seem charming, it's importa…
ytr_UgynWLIDF…
@LavenderTownebecause you are being a midwit lol, when you don't know what you …
ytr_Ugy_0Gx-t…
Comment
AI is good for prototyping and shipping something in a time critical manner, otherwise, no. It's still faaar from perfect.
Also, my opinion is that, it's easy to be critical of someone else's code rather than your own. I know AI misses a lot, but it's not entirely bad, you could use most of the code. So even if it's not good, it's better than making it from scratch.
And imo, AI prompting is a SKILL, someone who vibe coded many apps can get the ai to generate good quality code just by knowing how to prompt each model (some models respond better for certain prompts).
youtube
AI Jobs
2026-01-19T17:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgyDZKSrZChi35hSQBp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxy1cWjHgipSBl1TRx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyVkBIU8x_0vWiXWFt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4HPyJH_VBaqWaPgh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzqXXcg3FmVqLhXXf14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw7fIfHl6UOI0wIcC94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5Ka4Ah4tUWir8tvh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTxyrq7XFGq-QmA6d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzBuhum89E8DF23rTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnaI_zhKb9fVEXXTx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
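The raw response above is a JSON array of per-comment code objects keyed by `id`. As a minimal sketch of how such a response could be parsed and looked up by comment ID (the function name `parse_coding_response` is hypothetical, not part of this tool; raw model output sometimes closes with `)` instead of `]`, which the sketch repairs before parsing):

```python
import json

def parse_coding_response(raw: str) -> dict:
    """Index coded comments from a raw LLM coding response by comment ID.

    Assumes the response is a JSON array of objects, each carrying an "id"
    key plus coding dimensions (responsibility, reasoning, policy, emotion).
    """
    text = raw.strip()
    # Repair a malformed close: some raw outputs end with ")" instead of "]".
    if text.endswith(")"):
        text = text[:-1] + "]"
    records = json.loads(text)
    return {rec["id"]: rec for rec in records}

# Hypothetical usage with a shortened response string:
raw = '[{"id":"ytc_A","responsibility":"developer","emotion":"approval"})'
coded = parse_coding_response(raw)
print(coded["ytc_A"]["emotion"])  # approval
```

A real pipeline would likely also validate each dimension against the allowed code set before writing the result table, so an unrecognized value surfaces as "unclear" rather than failing silently.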