Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@Speaker-Beater You're just hating AI for the sake of it. so, why even bother ,…" (`ytr_UgyDJT-Mk…`)
- "I'm not smart enough to question the test however if we haven't had an AI before…" (`ytc_UgwM9NEWJ…`)
- "Perhaps some of them have a strong personal interest in spreading the hype which…" (`ytr_UgyiWM1_r…`)
- "AI in healthcare is scary enough, but imagine if we had structured workflows to …" (`ytc_UgydeD7fu…`)
- "honestly lets just start bullying people that use ai-art. Dont you have any frie…" (`ytc_Ugx1txpX5…`)
- "Lmao listening to this interview about AI taking over the world while chatgpt is…" (`ytc_UgyZzuZmY…`)
- "Rather than viewing AI as a source of economic displacement, we should recognize…" (`ytc_UgzBTd4hf…`)
- "I might be in a minority but I love the trippy surreal AI mashups and many are v…" (`ytc_Ugyc37GL7…`)
Comment
I don't understand all the praise about this interview. This person is supposedly an expert in AI, and not once have I heard him speak about the shortcomings of transformers and the doubts that it can take us to AGI. This feels like bullshit on many levels. Ask any AI to do some mildly difficult intellectual job (like reconciling some financial data between a few files and using judgement like any accountant or auditor would) and it will fall on its face miserably.
What we have now is not intelligence. We have some highly efficient search engine that is really good at impersonating humans and retrieving info it has been trained on. But the moment you ask anything outside its training data, it fails like a 5-year-old. How can it be so good at solving highly complex math problems but be fooled by a common brain teaser in which we introduce a small variation that any child would notice?
Those things are far, far, far from being able to generalise what they "know". And they won't be unless there is a major architectural breakthrough. This breakthrough (or breakthroughs) could happen in the next few years, or decades, or never. So the current state of the art is irrelevant in the context of AGI, let alone ASI. And yet everybody seems to think it is around the corner. The level of delusion of seemingly intelligent people is mind-boggling.
Source: youtube · Topic: AI Governance · Posted: 2025-06-21T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxZf173wncWqI17rfd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgynxRAQTlanc2TAV2p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw2srNuaDr_yyKdNmd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz62Y5J7DOR-BCvs8J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyx2dPMo8AS-fMeSml4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwah_aD1wZmRnyC7eF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxSMVaAKNLYu-uyhyd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDaOy2zMk5UYLbVfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzof2RPp_zLGXg7h9J4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGJCSuBBXdASfUj0F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
```
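The raw response above is a JSON array with one object per comment and a fixed set of coding dimensions. A minimal sketch of how such a batch might be parsed and validated before populating the Coding Result table; the allowed value sets below are inferred only from the values visible in this one response, not from any documented codebook, so the real vocabulary may be larger:

```python
import json

# Allowed values per dimension (assumption: inferred from this batch alone;
# the actual codebook may permit additional labels).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding},
    rejecting rows with missing or out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip rows without a comment ID
        coding = {}
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim}={value!r}")
            coding[dim] = value
        coded[cid] = coding
    return coded

# Hypothetical one-row batch for illustration.
raw = ('[{"id":"ytc_X","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(parse_batch(raw)["ytc_X"]["emotion"])  # outrage
```

Validating against a closed vocabulary catches the most common batch-coding failure, where the model invents a label outside the codebook; raising on the offending comment ID makes the bad row easy to re-run.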