Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the things I see in common between Yudkowsky, Hinton and other AI doomers is their _very_ imprecise use of language in describing their own ideas, their common anthropomorphization of the systems they've developed, their faulty causation or logical assumptions (like somehow "the AI kills everyone on earth", which is preposterous on its face), their common tech-bro belief in their own overweening intelligence, that their expertise in one area somehow makes them experts in fields they clearly know nothing about (like biology or even epistemology or analytic philosophy), like Thiel thinking he knows something about eschatology, or Andreessen thinking he's some kind of philosopher, or Musk thinking he's an engineer despite having no engineering training whatsoever. Like maybe the reason Ezra seems to at times be having trouble following Yudkowsky's arguments isn't because Ezra isn't smart enough or an expert in the field, maybe it's that Yudkowsky simply doesn't make any sense. As someone with experience in the field, every time I encounter one of these conversations I come away worrying much less about AI, much less any "AI Apocalypse," because these arguments are on their face not remotely convincing.
youtube · AI Governance · 2025-10-15T11:0… · ♥ 13
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwYuhFUceLUp0DLTQl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzHgKJD8ED47ov2Nld4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyeX3fsEXHIblcijz54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyYUBC6bjY3u0o51ox4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgweJJ5Wqx9AE1Cc4W94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzaWCl__DKhlDGg9Qx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzqjyJAnCWCF6AbzlZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxYL4689kpmBK9Nlyp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx5R6RPeKobIC3_9et4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy8M5dKr2wOu7Bq50N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
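A minimal Python sketch of how a raw response like this could be consumed: parse the JSON array, index the per-comment codings by id, and look up the coding for the displayed comment. The variable names are illustrative; the two entries below are copied from the array above.

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw_response = """[
  {"id": "ytc_UgyeX3fsEXHIblcijz54AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgweJJ5Wqx9AE1Cc4W94AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]"""

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding for the comment shown on this page.
coding = codings["ytc_UgyeX3fsEXHIblcijz54AaABAg"]
print(coding["reasoning"], coding["emotion"])  # deontological mixed
```

The lookup matches the Coding Result table above: the displayed comment was coded deontological/mixed with no responsibility or policy code.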