Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by its comment ID; a minimal lookup sketch follows the raw response at the end of this section. Random samples from the coded corpus:
- Why on earth would you want the single fundamental value to be *simple*?! And, s… (rdc_cwmenf2)
- I liked these podcasts but this kind of title? I'm not even gonna listen to it, … (ytc_UgzJbbOzS…)
- Funfacts about ai: they sometimes mash items together or make way too realistic … (ytc_UgxN_yte9…)
- Fact: "Do you know before you can create AI, you should know Computer Science or… (ytc_UgycW0meo…)
- Krystal isn't the brightest bulb, but when she says they 'grew' they AI as oppos… (ytc_UgwRDowZC…)
- @paolovela9153 Thanks for your comment! 🤖 I appreciate your camaraderie in the s… (ytr_UgxMTx1R-…)
- AI will usher in the greatest transfer of wealth from the working class to the o… (ytc_UgxsG9xaA…)
- People not understanding what chatgpt and the other "AI" tools actually do, and … (ytc_UgxJog9AY…)
Comment
I liked that you put "hallucinations" in, well, quotes (at 3:48). Because only humans (and maybe some animals) can hallucinate, "AI" (note the quotes) cannot do so since there's no consciousness, not even sentience, behind that. When I write AI, I mean what is now called "artificial general intelligence", and what for ages was (like back when I studied CS) simply called AI… it was quite obvious to anyone who dabbled in that field that you'd not call anything "AI" that wasn't true intelligence.
Oh, and "artificial general intelligence" is another pet hate, because it sounds like "generative AI" is some specialized kind of intelligence, when it actually isn't any kind of intelligence to begin with.
youtube · AI Jobs · 2026-02-08T12:1…
Coding Result
| Field | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
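
The four coding dimensions map naturally onto a small record type, which also makes the coded values easy to validate. The sketch below is illustrative only: the value sets are just those observed in this section (the full codebook may define more), and the `CodedComment` name is an assumption, not part of the actual pipeline.

```python
from dataclasses import dataclass

# Value sets observed in this section's output; the real codebook may allow more.
RESPONSIBILITY = {"none", "developer", "user", "ai_itself"}
REASONING = {"mixed", "consequentialist", "deontological"}
POLICY = {"none", "industry_self", "liability", "ban"}
EMOTION = {"indifference", "approval", "fear", "mixed", "outrage"}


@dataclass
class CodedComment:
    """One coded comment, matching the objects in the raw LLM response below."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """True if every dimension uses a value from the observed sets."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```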
Raw LLM Response
[{"id":"ytc_Ugz-0tUImsHPOW0Whn54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_Ugxig-ilfcKHJZwsAPh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwFdFyTVfcOe-DkV2h4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgxwVza5EhO28fyQrW14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgwCvwMCosAQWzNdDkB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_UgzLNDaq0uc1nDPRDON4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgyrcLUqFrNkga8xXMl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_Ugw1bhKxt1kHXxVus6N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzL51axPPGjx0kogYZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},{"id":"ytc_Ugz4RqFHUdu2mSQGKO14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}]