Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Question: If something being conscious means that it is like something to be it, then what experiment could be used to establish whether it is like anything to be the machine? Because if the machine would have been expected to behave like that given its build and inputs even if all its behaviour was non-conscious, how could that behaviour prove that it was conscious? So for example imagine that the computer driving the behaviour was powered by a configuration of NAND gates (which are functionally complete). If the behaviour can be explained by the NAND gates' configuration and state and the inputs the configuration received, there would be no need to posit that it was like anything to be that NAND gate configuration in order to explain its behaviour; it could be assumed it wasn't like anything to be it, and a theist presumably would assume that. The atheist might claim that it would be the same for a human, but if that were the case then, as with the robot, the experience wouldn't influence behaviour. Yet personally I am infallible on the fact that at least some of reality is experiencing, and I am basing that claim on the fact that I am currently experiencing, so while the atheist may not accept it, I am certain that I am not like the computer. The usual atheist trick to hide the issue is simply to change what is meant by the word conscious, and define it as performing a certain behaviour, such as passing the Turing Test for example.
youtube AI Moral Status 2020-07-13T17:2… ♥ 2
Coding Result
Dimension      Value
Responsibility unclear
Reasoning      mixed
Policy         unclear
Emotion        indifference
Coded at       2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugwqoed4lf_k2U0ltB94AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz65U1X58QEexSDBx94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwyjPvCUroBKZ62kxl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxJxByxdAhlPXPTdXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmcrwaXmz2NG4URFZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwBUyKtpakcyl4wwIl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugx60WlpA2rlF7T60LZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzwK0SbACHJ5NtbXL54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgwlrLg5UKyLe_u7rrd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyRm2GnzhSvWmXj-YR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
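The raw response is a JSON array where each object carries a comment id plus the four coded dimensions, and the "Coding Result" table above is the entry matching this comment's id. A minimal sketch of how such a response could be parsed into a per-id lookup — the `parse_codings` helper is hypothetical, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown above:

```python
import json

# Abbreviated raw model output; field names match the response above.
raw = '''[
  {"id": "ytc_UgwyjPvCUroBKZ62kxl4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwqoed4lf_k2U0ltB94AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]'''

def parse_codings(raw_response: str) -> dict:
    """Map each comment id to its coded dimensions (hypothetical helper)."""
    rows = json.loads(raw_response)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codings = parse_codings(raw)
print(codings["ytc_UgwyjPvCUroBKZ62kxl4AaABAg"])
# → {'responsibility': 'unclear', 'reasoning': 'mixed',
#    'policy': 'unclear', 'emotion': 'indifference'}
```

Keying the lookup by comment id makes it straightforward to join the model's codings back to the original comments, as the table above does for this one.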