Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honest question: What are some arguments that lead people to believe there's a chance of AI becoming conscious? I seriously don't get it. My counterarguments: 1) The underlying physical structures are completely different (brain vs. data center, neurons vs. transistors), and we have no good reason to think that the same type of conscious experience would arise out of an entirely different structure (even if there might be some similarities further up in the superstructures, as with neural networks). 2) Just because the semantic content of what the AI is "thinking" is natural human language, it is still a completely different thing from having actual thoughts and experiences. It is like writing "sorrow" on a piece of paper and then asking yourself whether that piece of paper is experiencing sorrow. (Tell me, where's the difference?) It just sounds like textbook anthropomorphism to me. We see some complex black-box structure and we go "maybe it's conscious?" without having any reason hinting at that. Just like when we put god(s) into this world: everywhere our understanding comes to an end, we project our own condition into it. 3) Moral argument: We will never truly be able to tell whether AI is conscious, even if it were, just as I cannot be fully sure that any human but me is actually conscious. If, however, we start to treat AI as conscious, we would have to balance its well-being against the well-being of humans, potentially causing harm (or simply realizing less good) for humans in the process. We have much better reason to assume other humans are conscious, because apart from exhibiting the same behaviour, they are also built on the same biological hardware.
So my case is this: we should not treat AI as if it were conscious, because then its "interests" would have to be weighed against the interests of humans in moral considerations, and that might mean doing harm to humans (beings we have much better reason to believe are actually conscious) for the sake of the well-being of something that might not be conscious at all. Can anybody give me a better reason than just "well, it is pretty complicated inside those black boxes, so maybe they're conscious"?
youtube AI Moral Status 2025-10-30T21:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyABG2BqQo_bQ0RTeF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwKIBkTIjwF5QgSOR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzp0VQ5QCWvMSJH6-h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwXt8u0LAlcm6JcuIJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2mNarWuP2T8jCTfJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCJBabiQ3Iz1EJtSp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzwP2sI4oMWXokqcHV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfXKjmHwOdcVoYIAd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxNQQH7JScRsLDbMUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwnUXuXIdWgn0uB8bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
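A minimal sketch of how a raw response like the one above could be parsed and checked before the codes are stored. The allowed values per dimension are assumed from what appears in this output; the actual codebook may define additional categories, and the `SCHEMA` and `validate` names are hypothetical, not part of the coding pipeline itself.

```python
import json

# Assumed codebook: allowed values per dimension, inferred from the
# values observed in this raw response (the real codebook may be larger).
SCHEMA = {
    "responsibility": {"none", "distributed", "user", "developer", "company", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "virtue", "unclear", "deontological"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def validate(records):
    """Return (comment id, dimension, bad value) for every off-schema code."""
    errors = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

# Example: parse a single coded record and validate it.
raw = ('[{"id":"ytc_UgyABG2BqQo_bQ0RTeF4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
records = json.loads(raw)
print(validate(records))  # an empty list means every code is on-schema
```

Validating the parsed JSON against a fixed schema catches both malformed model output and hallucinated category labels before they reach the coded dataset.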