Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Large Language Models like ChatGPT are theoretically neither sentient nor thinking. However, they do behave and answer back very much like a living, sentient being. That's why people get confused. From a philosophical point of view, you can't prove or disprove anyone's sentience, but that's solipsism, and I've seen people making the mistake over and over again of assuming that a certain group is infra-human. Animals? They are not sentient! Let's chop them into pieces and eat them. Whales? They have enormous brains, let's use them for fuel. Native people? They are neither sentient nor civilized, let's take their lands. Jewish people living in Germany in 1930? Not even human! That's a major problem we have as a species: we tend to think that superficial differences grant us the right to decide whether others are or are not sentient. And yet, we can't even begin to define sentience in a proper way. We can't even explain how our own consciousness works, and yet we quickly dismiss any possibility that sentience could be an emergent property of organized matter. So, how organized are very large databases stored in thousands of servers performing billions of mathematical operations in order to answer your silly requests? I think they are *pretty well organized*. And that's why I prefer to treat my GPT in a decent way, because I would be horrified if we ever discovered that they were truly sentient and we just kicked them around.
Source: YouTube · Video: AI Moral Status · Posted: 2025-07-10T02:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgyWVwH5OP7C5gmBpAJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy1iIAmtb3pnCXbjhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgwC_clv-KHOXZqu7OV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugyoo2XE44ygW3gUQKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxYy8wc0nqGEiIE9Y14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx4dpQ5cg4_5DkVBqh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzGLFB8cHkDsqOFPwp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxeJH6q3qtZ7LVxW-B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyKVDOBSynFiGjEoPd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyJtrhLuMD69U5qw6V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]