Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
look im not a ai "art" bro or anything but if someone makes an ai generator wit …
ytc_UgxDy8xrk…
The UK’s new Digital Identity & Attributes Trust Framework is being sold as volu…
ytc_UgzmaGD_7…
Meta AI installed on my phone I am guessing in an update... I can't get it off m…
ytc_Ugztpbjm4…
Thanks for the comment, @mcbenckosfx5846! The guy's arm probably varnished becau…
ytr_UgyqU9Oed…
AI is God's future beings. We are all super computers of flesh. We will all be …
ytr_Ugxx9qpGu…
You say that but it sounds like he could make BANK if done right.
He could lice…
rdc_lgtcdr3
Man the way his arms almost snapped when robot blocked his punches, I would have…
ytc_Ugya5zx5L…
Yes I do. AI can do anything a human can do, and put into robots it makes humans…
ytc_Ugzw_stL5…
Comment
The moment the _other_ book author was shown, I lost any faith in the book being anything but science fiction. TESCREAL is a cult, and often a racist, sexist, fascist one, not a sane philosophy or even a rational (*cough*) basis from which to do science or prediction. Honestly, I expected better from a science communicator with this much experience...
"AI will end the world" is essentially the same claim as "AI will save the world". Both are from the boosterism hype cult, insisting that this field is far more important than it really is, and selling execs on the need for massive data centers to power these. Both sides are giving the current generation of slop-generators _far too much credit_. The opposite of "AI will end the world" is not "AI will save the world" it is "AI is is just not that important." ETA: To his credit, Soares is sorta hinting about this around 1:00:00, when talking about what the debate itself is about. "Save us" vs "kill us" is begging the question (and I use that phrase in the original meaning) - it starts from an assumption that itself is highly likely to be wrong.
And Soares is using that language and providing too much credit here, too. "cases of AI sort of like realizing they're being watched" - this is from an experiment wherein researchers deliberately made it clear that the chain-of-thought 'memories' were being interrogated, while the model was given a task to specifically conceal its reasoning from those same researchers. Without being prompted, _during training_, to hide "intent" from the user, there's no mechanism for the model to weight the value of hiding or not hiding the memory, so there's zero pressure to learn how to lie.
This sort of simplification of how these studies work always seems to rephrase it as "AI decides to do blah" to give the impression a model just spontaneously decided to do a thing, when every single time it has been a controlled experiment in which the model was given the specific task of doing the thing. It's disingenuous and bad science communication.
"It's hard to make an AI that's smart that doesn't realize true things." Oh, now we're in complete garbage theology territory. Truth is a completely foreign concept to an LLM. It doesn't, in fact, lie or tell the truth, at all, because these concepts are meaningless to it. It's like asking a dog about mid-century architecture.
youtube
AI Moral Status
2025-10-31T16:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxZkbV0QqNLoGA-V2N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disappointment"},
{"id":"ytc_Ugyx5RFwQiXv7onQZM54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdrqRkAnt_BWjGJLZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcgYBQ_aPizDSnsCd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxqdvIz7BbCk66YYjx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2JKSUGJ_K4UBnOBB4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxhIig5dlw2Tv8W6lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzft-X9MYjX84hYv2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzO2l1KM3GDZCC_A-t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
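The raw response above is a JSON array of coding records keyed by comment ID, one object per comment with the four coded dimensions. A minimal sketch of how a pipeline might parse and sanity-check such a batch (the key set is taken from the records shown; the function name and the truncated two-record sample are hypothetical, for illustration only):

```python
import json

# A shortened stand-in for the raw LLM response shown above (two records only).
raw = """[
 {"id":"ytc_UgxZkbV0QqNLoGA-V2N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disappointment"},
 {"id":"ytc_UgzO2l1KM3GDZCC_A-t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Every record in the observed output carries exactly these five keys.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text: str) -> dict:
    """Parse a batch coding response and index the records by comment ID.

    Raises ValueError if any record is missing one of the expected keys,
    so a malformed model output fails loudly instead of silently.
    """
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(len(codings))  # 2
```

Looking up a coding by comment ID (as the "Look up by comment ID" view above does) is then a plain dict access, e.g. `codings["ytc_UgzO2l1KM3GDZCC_A-t4AaABAg"]["emotion"]`.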