Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Wolfram sort of inadvertently reveals what an awkward, damaged, fake intellectual Yudkowsky is. I like Yudkowsky's preoccupation with the dangers of AI development. It's good if he can make anyone aware of and concerned about this existential problem - even if it's the fake lisp, talk fast and use a lot of jargon while changing the subject a lot to pretend to others (and one's self) that "You know, I'm a geek. We're geeks, we're smart."
The valley is filled with a lot of people with bachelor's degrees who spend all their time coding and for some reason believe that they are super intelligent scientists... in a religious cult sort of way.
Yudkowsky is worried about "consciousness in the universe!" Jesus. There appears to have been consciousness on earth only so far as we know and maybe for a couple of million years - in sapiens for maybe 120,000 years. The universe? We have to protect sapiens?
He even says that he'd be comfortable with his totally scanned brain data reproduced inside of a computer that was totally complete where his computer self would answer every question and behave in every way the same, that you could kill his "human body" so long as the digital self didn't know the difference and really thought it was him, he'd be OK with this.
This is such a deeply undignified and damaged and inhuman way of being. Maybe if people are like this, it would be better if humans were retired and less damaging and damaged species might continue on earth.
Humans have this childish tendency to really believe
"What if they kill us and then go on to not do anything very interesting with the galaxies they colonize?" What kind of damaged child man would even think a thought like that? It sounds like something an awkward and lonely eight year old boy. "The universe gets a little darker every time someone dies of old age."
I haven't heard Wolfram speak more than maybe once 20 years ago, but he is so polite and patient that I'm astounded. He must be a parent, grandparent and teacher to have developed such calm patience in the face of viscerally frustrating ridiculousness.
Humans ought to be concerned about preserving humanity because humans like humans and think highly of them.
Our current predicament with AI that's most likely is that a small group of people will get way too much power and use it to enslave everyone else and even to cull the population by quite a lot... maybe not least to make more electricity available for data centers. There are a lot of humans around because they were the only agents that could perform tasks on command until now. If AI can perform lots of tasks on command, elite power is going to use that because humans disobey, think secret thoughts, rebel, plot against you. There are limits to what you can order them to do. They are dangerous and expensive. That doesn't mean that elite power won't require lots of human workers, but we got more than eight billion of them and we probably would be better off with maybe two billion.
It's unlikely, but possible that AI itself destroys humanity. What's more likely is that competing elite power uses AI weapons, robots, etc. to destroy each other because the human sense of a lack of security with a sophisticated competitor is probably inescapable.
youtube · AI Governance · 2025-09-13T18:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzCnJEYgS4Gt6aF4mt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2SpFDrQzFraxnnKd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyOqt8CXQJ8RdsMO9F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyOOTryi47R_beqi6Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyIAa6SDkpXaA0j1dJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgymNBA5-cA903RN8d94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgySrkaLrZkp5tjm5ft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwfKuG-8cR_batwnEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzLC6OzO5UJWwE3sft4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwCFO_EfVHH2DcNgUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
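The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and indexed for the "look up by comment ID" view — the function name and validation rule here are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Two records copied verbatim from the raw LLM response above; in the real
# pipeline this string would be the model's API response body.
raw_response = """
[
 {"id":"ytc_UgzCnJEYgS4Gt6aF4mt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyIAa6SDkpXaA0j1dJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the JSON array and index each coding record by comment ID,
    skipping any record that is missing an expected dimension."""
    records = json.loads(raw)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

codes = index_by_id(raw_response)
print(codes["ytc_UgyIAa6SDkpXaA0j1dJ4AaABAg"]["reasoning"])  # virtue
```

Indexing by ID up front makes each lookup O(1), which matters if the inspector pages through thousands of coded comments.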