Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "That is how ChatGPT works though. It has no knowledge of anything besides its tr…" (ytr_UgydGQ7P8…)
- "Gabi, thank you for making this follow-up video! This is something that artists …" (ytc_Ugy9FISIy…)
- "Imagine waking up to find out an ai said you would commit a crime and then you g…" (ytc_UgwZY1WN5…)
- "Written tests using ai-procedure is no different than using google to answer a t…" (ytc_UgxyI9ISV…)
- "more tech nerds talking AI....do you guys work for google too...I think AI shoul…" (ytc_UgzEkBBNL…)
- "The average person does not wish to ask AI questions. They want AI to do somethi…" (ytc_UgwMpKYGC…)
- "You'd think so, but I've had discussions with people on Reddit that insist that …" (rdc_jmu53et)
- "Felt my IQ dropping during every minute of this video. Checked out at 6:19. Ser…" (ytc_UgxKolA1C…)
Comment
AI is only as good as the data set. In order to make the AI perfect, you would need to carefully vet every single line of code you plug into the data set. They all have to be perfect.
This goes for every single other AI use case.
Now think, is that really possible in the next 20 years? Nope. Even with AI agents piled up... It needs brilliant humans to stack it line by line. Which is unimaginably expensive.
youtube · AI Jobs · 2026-02-10T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxncSomFDtuIo0-weV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyH6oQDU2Ozipx71UJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwh6_v21TXRFwCB-y94AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXJI6itbYhB6plLvd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxX7y0qJE2DncfTjxd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxMgCYwYJjtzws5vhx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwRY9yOdd8PMBn6VB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxlOUZizhSSCfnMm1J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzjkYxP5Y_kD0Rmxp94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
{"id":"ytc_Ugymhxyhy7jyF2kCTnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
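A raw response like the one above has to be parsed and checked before the codings are stored, since the model can return malformed rows or out-of-codebook values. Below is a minimal validation sketch; the allowed category sets are inferred from the values visible in the response above and may be incomplete relative to the real codebook (an assumption), and the sample IDs in the usage line are made up for illustration.

```python
import json

# Allowed values per coding dimension, inferred from the response shown
# above; the actual codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "company", "none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear", "liability", "industry_self", "ban", "regulate"},
    "emotion": {"resignation", "outrage", "mixed", "indifference", "approval", "fear"},
}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must be an object with a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Every dimension must be present and drawn from the codebook.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid


# Hypothetical example: the second row uses a value outside the codebook
# and is dropped.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print(len(validate_codings(raw)))  # 1
```

Rows that fail validation can then be queued for a retry prompt rather than silently written to the results table.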