Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- Oh, no, not Yudkowsky. I'm not sure I can watch this video out of fear that it's… (ytc_UgwV58WTv…)
- You have to be an owner. An owner of assets. An owner of an AI. There will only … (ytc_UgwCzSIFD…)
- Ai should assist laborers not replace them. Shorten the work day, shorten the wo… (ytc_UgxKJ4wOo…)
- Automation is to reduce the cost of things. If there is no cost then everything … (ytc_Ugz3STXjL…)
- You bring up a crucial point — clear, detailed reports are essential for effecti… (ytr_UgyPqFl7p…)
- if you noticed you will find that the jobs that are really high ranked are nearl… (ytc_UgzY8ii3z…)
- "you don't understand you simply HAVE to give us 10 gorzillion dollars! It's the… (ytc_UgyK3-m5W…)
- Many see this coming. I’ve had long chats with Gemini, Claude, and ChatGPT abou… (rdc_o762aum)
Comment
Andy really needs to understand what concepts such as "functionalism", "hubris", and "wilful blindness" mean. That (important caveat) said, however, let's address your own thesis. Your claim is (something like, you were a touch vague) that people such as Mr Bostrom and others who aren't so ignorantly myopic as to reject the prospect of machine awareness out of hand hold such an eminently logical and indubitably sensible position because they lack the ken to appropriately discern the essential differences between the computations undertaken by a human brain and those undertaken by a machine.
The critical point of error here is, in your opinion, that such intellectually barren people cannot apprehend that one system operates on a biological substrate of neurons, while the other operates on an artificial substrate of neurons (a.k.a. "floating-point numbers represented as weighted vectors in a higher-dimensional Hilbert space"). "If only," thinks Andy, "people could somehow see that human beings are NOT the same thing as machines!" And then Andy has the further, absolutely brilliant insight: "Ah-ha! If 'people' (my emphasis) simply understood that there are some systems that, while they may be indistinguishable from another system on a functional level, are, when viewed from a different perspective, nevertheless architecturally novel and distinct!" And, filled now with the buoyant jubilation and effervescent verve that naturally accompany such epiphanic moments, Andy determines to go forth and spread the good word throughout all the many and varied lands of YT comment sections.
And on the first day, Andy said "Do you even know what a simulacrum is?".
Ok. Good. And to which my reply would be "Who the fuck truly thinks that a computer is LITERALLY a human being?" Because I've never met someone who believes anything even close to this.
I mean, sure, given a blind method of communication such as text or voice it's not possible to tell a human from a frontier LLM (and if you don't believe me, just go over and check a few subreddits, in which an estimated half of all back-and-forth communication is now being generated by LLMs conversing with both humans and one another). And, once you've checked and determined for yourself that this data point must be bullshit, you'll have proved my point perfectly. For the contention is that YOU CAN'T TELL THE DIFFERENCE. And it's true, you can't.
"But a machine is NOT a human!" Andy doth protest. "And the people on the Star Trek holodeck aren't REAL!"
"Ummm, well... yes?" But this is an entirely trivial observation, and I'd challenge any two-year-old out there not to arrive at the same conclusion. So, congratulations, you're ready for preschool now!
The point, Andy, of this whole discussion, is not "How can we possibly see the difference between minds based in carbon and minds based in silicon? How can we draw a distinction between minds with output states determined by electrochemical signalling, and minds whose output states are determined by electromagnetic signalling?", because that's a purely trivial task.
No, the question being posed here is "By what means or methodologies may we come to understand what the processes are by which, despite the existence of such glaringly obvious differences in substrate, artificial minds can have come to be such perfectly indistinguishable simulacra of our own?"
And, spoiler alert here, Andy, the current answer is that WE HAVEN'T GOT A FRICKING CLUE!
The best, and by that term I mean the VERY BEST, minds on Earth - that is to say, the people who have built, designed, and programmed them - openly admit that, when posed with the question of just how it might be that an LLM, whose base code is barely longer than that which operates a domestic Nespresso™ coffee maker, can then, simply by having the entire internet fed through it, somehow come out the other side sharing, from a functionally behavioural perspective, the same characteristics as a human, they have an insight that in numerical terms amounts to something less than 3%.
For the truth is that NOBODY does know how they work, and the 3% figure is exactly and precisely that which Dario Amodei, the CEO of Anthropic, makers of the Claude series of LLMs, recently nominated as being the high point of their understanding.
The moral of this rather lengthy tale is then, Andy, that NOBODY is oblivious to the differences in substrate between man and machine, and that EVERYBODY is oblivious to how such simulacra exhibit behaviours that are identical to our own.
Now, there are two broad epistemic means of investigating this most strange conundrum: you can believe that what these machines are doing is somehow very special, or you can believe that what we do is not... special, I mean. For it can't go both ways; one or the other of those two views must be right. Which will it be for you, Andy?
Oh, and you're most welcome. My pleasure!
youtube
AI Moral Status
2026-04-18T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgyFzCBkbx55g4jSHQx4AaABAg.AVjDlzKdNYQAVkh9F9B59i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyFzCBkbx55g4jSHQx4AaABAg.AVjDlzKdNYQAVkhzNwxDhS","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgyFzCBkbx55g4jSHQx4AaABAg.AVjDlzKdNYQAVkrOHqqpjE","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyFzCBkbx55g4jSHQx4AaABAg.AVjDlzKdNYQAVl5xNQ_k_N","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwQQDqZqpRcG3Bp1FF4AaABAg.AVj4fG0laDAAVjPNj3TFEp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgwQQDqZqpRcG3Bp1FF4AaABAg.AVj4fG0laDAAVjYTyU7B_i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzGc2rCtQpc8tbxbxR4AaABAg.AVixbOJHVEaAVjGmi_cab5","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgzGc2rCtQpc8tbxbxR4AaABAg.AVixbOJHVEaAVjPjypvfRn","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugzehwm8EI65FO1cm-d4AaABAg.AVitq7cABHLAVjzVzN7XIh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgyljG8n0jMgFpdwZH54AaABAg.AVisRVRuhf9AVjr_A6Jk5g","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
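The raw response above is a JSON array with one coding row per comment ID, which is what makes the "inspect the exact model output for any coded comment" lookup possible. A minimal sketch of parsing such a batch and indexing it by ID — the field names come from the response itself, but the required-key check is inferred from the rows shown, not from any documented schema:

```python
import json

# Raw model output as shown above, truncated to two rows for brevity.
RAW = """[
  {"id": "ytr_UgyFzCBkbx55g4jSHQx4AaABAg.AVjDlzKdNYQAVkh9F9B59i",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyFzCBkbx55g4jSHQx4AaABAg.AVjDlzKdNYQAVkhzNwxDhS",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "outrage"}
]"""

# Key set observed in the response rows (an assumption, not a published schema).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_codings(payload: str) -> dict[str, dict]:
    """Parse a batch response and index the coding rows by comment ID."""
    rows = json.loads(payload)
    index = {}
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing {missing}")
        index[row["id"]] = row
    return index


codings = index_codings(RAW)
row = codings["ytr_UgyFzCBkbx55g4jSHQx4AaABAg.AVjDlzKdNYQAVkhzNwxDhS"]
print(row["emotion"])  # outrage
```

Indexing by the full `id` string (rather than the truncated display form) keeps the lookup exact even when the UI elides the tail of long comment IDs.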