Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- EVERY secretary, customer service worker, tech support, etc... aka everyone who … (ytc_UgyRDbeN1…)
- @SneakyEmuthis entire channel is jimbob literallt arguing against ur dummy ah a… (ytr_UgzhTQc-t…)
- After all cars are made autonomous (which will come with time) the cars should b… (ytc_UggDy9xEJ…)
- 58:00 Very naive to think that the politicians aren’t aware of the power that wi… (ytc_UgxkFozAi…)
- I don't know, AI is going to have a hard time replacing me, I'm an Indigo 15K di… (ytc_UgwViHJkC…)
- @MrGrantGregory no I mean METHING as in your on Meth this was a Russian fight w… (ytr_Ugx0Ac8QM…)
- You missed like, everything. They explicitly state it will respond like what it … (ytr_Ugwei_7KP…)
- humans have been destroying the planet and life on it for a very long time. worr… (ytc_UgzXcPZau…)
Comment
I'm most cynical about the quality of code that's fed into GPT. The only way GPT will ever improve more is if the training data improves, but with things like undecidability or the halting problem, there are literally no solutions. To use Midjourney as an example--image generators produce amazing images because we're very good at feeding them high quality image training data. Humans can very easily recognize what a stunning photograph or painting looks like. Even coders with 20-30 years of experience can't necessarily look at code and tell if it is high quality, so there's no easy way to improve the training data.
Platform: youtube · Topic: AI Jobs · Posted: 2024-01-17T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwR60PDnaiWnx-ljJx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzTsA7W6T-ATyUL9Kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6xYwhRM_1DGv0hqZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyf1pd0c_Fo8acHMB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzwQZ-rD_MrTC_QgLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzj_e--UqYRk3wfj2Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxsm3ZVt_l2uCx57Jt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDI2OH_Eaz3Fos8U94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxaW59kXLpIWrSkCPx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxD0jkN5QdxTHZP0ep4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
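Each raw response is expected to be a JSON array of per-comment code assignments across the four dimensions in the table above. A minimal sketch of how such a response could be validated before the codes are stored; the allowed label sets below are inferred only from the responses shown on this page, not from a full codebook, so treat them as assumptions:

```python
import json

# Label sets observed in the raw responses above. The actual coding
# scheme may define additional values -- these are assumptions.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "ban"},
    "emotion": {"indifference", "approval", "fear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it has an "id" field and every coded
    dimension carries a value from the allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage: one valid record passes, one with an unknown label is dropped.
raw = (
    '[{"id":"ytc_a","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"ytc_b","responsibility":"nobody","reasoning":"virtue",'
    '"policy":"ban","emotion":"fear"}]'
)
print([r["id"] for r in validate_codes(raw)])  # ['ytc_a']
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch, which matters when one LLM call codes many comments at once.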