Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What always confuses me about armchair philosophers saying A.I. isn't conscious is that they're making an essentially religious argument. Either you agree with the vast majority of the scientific community and believe that human thoughts, feelings, and behaviors are largely deterministic, therefore making us advanced biological machines, or you believe that humans have some intrinsic, immaterial, unmeasurable quality, like a "soul", that A.I. can't replicate. You can't avoid that dichotomy. Even if you do attempt to dodge it by saying that A.I. understands syntax without semantics, at this point the Turing test has been smashed so thoroughly that it's no longer relevant. Philosophers have spent millennia trying to figure out whether other humans experienced reality the same way they did, and were not "philosophical zombies" (look it up, spooky concept), and you know what the conclusion was? "We can't prove it, but it doesn't matter anyway so let's assume they're not zombies." We're reaching that point with A.I. The only thing holding them back from understanding reality the same way we do is the five senses we have that they don't. That's a fixable problem. So again, either you accept A.I. as machines the same way that humans are and can and will eventually match or surpass our experience of reality, or you accept the inherently religious argument that humans are somehow exceptional. Unless you adhere to my specific faith and can be simultaneously religious and believe A.I. can match humanity, you're stuck in that dichotomy. If you can't figure your way out of it, then congratulations, you've just invented a new kind of racism.
Source: youtube · AI Moral Status · 2025-06-11T17:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyuMS6IsEZnwnQ3TSF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzM9YOoUv6G1FoDnF54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugwp0SBspmpMQVqQhkt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgydgL68AFtssJcI-sZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxY9s_v8TYOyJ7YE0h4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzV_wjSrgOFc54u7EJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzYCAr29FcGFzD1FJl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxxMufOFvBUJ3Rj98J4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwbPX6s9l1VSjZNMmB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwH9hyYJm3sSHARag54AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"fear"}
]
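One way to inspect a raw batch response like the one above is to parse it and index the records by comment id. This is a minimal sketch, not the tool's actual code: the records below are copied verbatim from the raw LLM response (first three shown for brevity), and the lookup id is one of those records.

```python
import json

# First three records copied verbatim from the raw LLM response above;
# the full batch contains ten records.
raw = """[
 {"id":"ytc_UgyuMS6IsEZnwnQ3TSF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzM9YOoUv6G1FoDnF54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_Ugwp0SBspmpMQVqQhkt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

# Index the batch by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the coding for the comment shown in this section.
coding = records["ytc_Ugwp0SBspmpMQVqQhkt4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # mixed mixed
```

The dictionary comprehension assumes ids are unique within a batch, which holds for the response shown here; duplicate ids would silently keep only the last record.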