Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgzvnBKWM…: "People are forgetting that as humans we love to control everything and we have t…"
- rdc_cjotmqw: "For now I can't tell if this is a rumor or an actual statement by her but appare…"
- ytc_Ugz0bGj4J…: "C19 ~ and dangerous 5G everywhere wasn't enough of an assault on humans, animals…"
- ytc_UgwRxJZ9l…: "I'm using AI to make my game I've written for over a decade....not because I don…"
- ytc_UgxI_PXxm…: "So true, I also hate so much when people turn their pictures into art or somethi…"
- ytr_Ugxceq9Bo…: "How dare a youtuber say that AI art is cool! It offends me because he didn't sup…"
- ytc_UgxwY9zVW…: "Reading the comments section I can tell that people still are unfamiliar with wh…"
- ytc_UgwZL42t_…: "So, AI can make art but it cannot clean the house or wash the dishes. Yeh…"
Comment
One of the things I see in common between Yudkowsky, Hinton and other AI doomers is their _very_ imprecise use of language in describing their own ideas, their common anthropomorphization of the systems they've developed, their faulty causation or logical assumptions (like somehow "the AI kills everyone on earth", which is preposterous on its face), their common tech-bro belief in their own overweening intelligence, that their expertise in one area somehow makes them experts in fields they clearly know nothing about (like biology or even epistemology or analytic philosophy), like Thiel thinking he knows something about eschatology, or Andreessen thinking he's some kind of philosopher, or Musk thinking he's an engineer despite having no engineering training whatsoever.
Like maybe the reason Ezra seems to at times be having trouble following Yudkowsky's arguments isn't because Ezra isn't smart enough or an expert in the field, maybe it's that Yudkowsky simply doesn't make any sense.
As someone with experience in the field, every time I encounter one of these conversations I come away worrying much less about AI, much less any "AI Apocalypse," because these arguments are on their face not remotely convincing.
youtube · AI Governance · 2025-10-15T11:0… · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgwYuhFUceLUp0DLTQl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzHgKJD8ED47ov2Nld4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyeX3fsEXHIblcijz54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyYUBC6bjY3u0o51ox4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgweJJ5Wqx9AE1Cc4W94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzaWCl__DKhlDGg9Qx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzqjyJAnCWCF6AbzlZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxYL4689kpmBK9Nlyp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx5R6RPeKobIC3_9et4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy8M5dKr2wOu7Bq50N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
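The lookup-by-comment-ID flow can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: the record fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the raw response above, while the variable names and the two-record excerpt are hypothetical.

```python
import json

# Raw LLM response as emitted by the coder (two records excerpted from the
# full ten-record response shown above).
raw_response = """[
  {"id": "ytc_UgwYuhFUceLUp0DLTQl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgweJJ5Wqx9AE1Cc4W94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the coded records by comment ID so any coded comment
# can be looked up in O(1).
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

code = codes_by_id["ytc_UgweJJ5Wqx9AE1Cc4W94AaABAg"]
print(code["policy"])   # regulate
print(code["emotion"])  # outrage
```

The same index supports the dimension table shown for each comment: the four coded dimensions are just the non-`id` keys of the looked-up record.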