Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The video contains multiple false claims and unsupported conclusions, repeatedly presenting unsettled research debates as established facts. While it correctly identifies some real limitations in today’s AI systems, it consistently exaggerates those limits into claims of impossibility. Prompt injection is framed as a fatal, unsolvable flaw when it is in fact a system-design and security problem already being mitigated in practice. Generalization is dismissed as nonexistent despite extensive evidence of cross-domain transfer, zero-shot reasoning, and emergent capabilities in modern foundation models. Hallucinations are treated as inherent defects rather than as alignment and grounding problems that are steadily being reduced through retrieval, verification, and refusal training. Most importantly, the video conflates the open question of how to achieve human-level general intelligence with the demonstrable and growing value of current AI systems. From technical mischaracterizations, it leaps to economic predictions about the collapse of AI companies that are unsupported by empirical evidence. In doing so, the argument substitutes philosophical pessimism for technical analysis. The real picture is not one of a dead end, but of powerful general-purpose systems whose limitations are active research problems—not proof that the current paradigm has already failed.
youtube 2026-01-08T14:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwYHWbqZ54ejGxUq-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz38yoNwCGprM9Gr3R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-hK1LOR8_MRDn6Lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzqhyYrSJZgq10c9mp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyuJcq3hbENEtLnvyl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyTtnbsDrCRj52umzZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKjiXlPpsmKLuOdGp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyvLH7rbAIn3V3ImIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw72l4Fqx5K8MOcuTV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyr-Dl4q-EPkMB37-F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
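The raw response is a JSON array of per-comment codes keyed by comment id, with one value for each of the four dimensions. A minimal sketch of how such output could be parsed and tallied (the variable names are hypothetical, and only three of the ten entries are reproduced for brevity):

```python
import json
from collections import Counter

# Three entries copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgwYHWbqZ54ejGxUq-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw72l4Fqx5K8MOcuTV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyr-Dl4q-EPkMB37-F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]'''

# Index the codes by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for a single comment by its id.
coded = codes["ytc_Ugw72l4Fqx5K8MOcuTV4AaABAg"]
print(coded["responsibility"], coded["policy"])  # developer regulate

# Tally one dimension across all coded comments.
emotion_counts = Counter(row["emotion"] for row in codes.values())
print(emotion_counts["mixed"])  # 1
```

The "Coding Result" table above corresponds to the single entry whose id matches this comment; the other entries code sibling comments from the same thread.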