Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What i dont understand is why this guy, who's educated and trained to work and think about these things, doesnt realize the most fundamentally basic kindergarten concepts.. Such as, simulation / imitation does NOT equal sentience or self-awareness.. The touring test is a terrible test, it does NOT show if something is sentient or not, it only shows if something can EMULATE human speech via input / outputs well enough to make it seem like a human is speaking. If i playback a recorded human voice on my computer, that doesnt mean my computer is thinking or feeling the same things the voice is portraying. Its just matching an output with my input. It can even modify the output (change volume and pitch, re-arrange the words), that still doesnt mean its sentient. It can even generate entirely new sentences, still doesnt mean its sentient. The cold hard truth is, we will NEVER, N E V E R, be able to conclusively say if any AI system is sentient. Because its a subjective experience that ONLY the entity itself can know. Heck, you dont actually "know" for a fact that other human beings are self-aware, you only infer and assume they are, because you're built on the same hardware and "you" feel self-aware, thus you assume other humans must also be self-aware. But you dont "know" know. Not for sure. You just take it on faith, that their reactions to stimuli are not just emulations, but genuine conciousness. The same way, you will never be able to know for sure if an AI, no matter how similar it is to a human, is actually concious or just an emulation. What i dont understand is, how this well educated smart guy that got a job at googles ai division, does NOT understand or didnt even spend 10 minutes to think about this deeper than at the most shallow surface level..
youtube AI Moral Status 2022-07-07T18:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgyfFo_3glrWGpWAzSF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypDFY_EB27l-vhqJh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz8yaV1udAgWfZTytd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxGHfAhkhtf3mQoirV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwVHy0kUIdQ-kbyTdB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
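The raw response above is a JSON array with one object per coded comment. A minimal sketch of recovering the coding result for a single comment from such a response (the ids and field names are taken from the response shown; the array is abridged to two records for brevity):

```python
import json

# Abridged copy of the raw LLM response shown above.
raw_response = """[
  {"id":"ytc_UgyfFo_3glrWGpWAzSF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxGHfAhkhtf3mQoirV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]"""

# Parse the array and index the coded records by comment id for quick lookup.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# Look up the comment displayed in this section and read off its dimensions.
coded = by_id["ytc_UgxGHfAhkhtf3mQoirV4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # → developer indifference
```

This matches the Coding Result table above: the entry for this comment's id carries responsibility "developer", reasoning "mixed", policy "unclear", and emotion "indifference".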