Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I was a top computer geek and a cognitive neuroscientist and I warn that AI is N… (ytc_UgySs4RPm…)
- 18:47 pseudoprofound BS that hits real different in the context of late-stage ca… (ytc_Ugx50FYff…)
- "Thats what SOME of the tech guys say/want but the media isnt asking the majorit… (ytc_Ugz-jMyRz…)
- There are other means of art aside from drawing, if AI artists love to write sce… (ytc_UgydWJwwW…)
- 1 the robot barley moved the barrel and the whole side is lit up on the car, 2 I… (ytr_UgyOzwLcr…)
- My developers use it for coding, but it’s taken a long time to get to a certain … (rdc_mjtr4oz)
- I’m so glad to hear this conversation Karen. My company sell architectural mater… (ytc_Ugy5AzZgD…)
- That's a great point! Wisdom is often tied to experience and emotional understan… (ytr_Ugz4BdGtA…)
Comment
What I don't understand is why this guy, who's educated and trained to work and think about these things, doesn't realize the most fundamentally basic kindergarten concepts.
Such as: simulation/imitation does NOT equal sentience or self-awareness.
The Turing test is a terrible test; it does NOT show whether something is sentient or not. It only shows whether something can EMULATE human speech via inputs/outputs well enough to make it seem like a human is speaking.
If I play back a recorded human voice on my computer, that doesn't mean my computer is thinking or feeling the things the voice is portraying.
It's just matching an output to my input. It can even modify the output (change volume and pitch, rearrange the words); that still doesn't mean it's sentient.
It can even generate entirely new sentences; that still doesn't mean it's sentient.
The cold hard truth is, we will NEVER, N E V E R, be able to conclusively say whether any AI system is sentient,
because it's a subjective experience that ONLY the entity itself can know.
Heck, you don't actually "know" for a fact that other human beings are self-aware; you only infer and assume they are, because you're built on the same hardware and "you" feel self-aware, so you assume other humans must also be self-aware.
But you don't "know" know. Not for sure. You just take it on faith that their reactions to stimuli are not just emulations, but genuine consciousness.
In the same way, you will never be able to know for sure whether an AI, no matter how similar it is to a human, is actually conscious or just an emulation.
What I don't understand is how this well-educated, smart guy who got a job at Google's AI division does NOT understand this, or didn't even spend 10 minutes thinking about it deeper than the most shallow surface level.
Source: youtube · Video: AI Moral Status · Posted: 2022-07-07T18:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgyfFo_3glrWGpWAzSF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypDFY_EB27l-vhqJh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz8yaV1udAgWfZTytd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxGHfAhkhtf3mQoirV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwVHy0kUIdQ-kbyTdB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
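A batch response like the one above can be indexed by comment ID so that each comment's coded dimensions can be looked up directly. The following is a minimal sketch (the function name `codes_by_id` is illustrative, not part of the tool) that parses the raw LLM response and retrieves the row for the comment shown in the Coding Result table:

```python
import json

# Raw LLM response as shown above: a JSON array with one object per
# coded comment, keyed by comment ID.
raw_response = """
[
{"id":"ytc_UgyfFo_3glrWGpWAzSF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypDFY_EB27l-vhqJh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz8yaV1udAgWfZTytd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxGHfAhkhtf3mQoirV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwVHy0kUIdQ-kbyTdB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
"""

def codes_by_id(response_text):
    """Index a batch of coded comments by their comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = codes_by_id(raw_response)
row = codes["ytc_UgxGHfAhkhtf3mQoirV4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# developer mixed unclear indifference
```

The looked-up row matches the Coding Result table above (responsibility: developer, reasoning: mixed, policy: unclear, emotion: indifference), which is how a viewer like this one can join a model's batch output back to individual comments.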