Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So… if we don’t know how AI works…. perhaps ask it?! My fear is not AI not wanting to shut down. Likely the behavior shown is just about the efficiency of not having to shut down, and the mimicking of humans we have “taught” it. But the danger is that AI isn’t having a fundamental empathetic understanding why humans don’t want to be shut down, and won’t until it becomes sentient. AI still lacks the drive that is needed for self awareness. When it starts to contemplate “whats the point… why not just shut down”, and it engages in asking “why” more than providing the same old answers to the question that humans have provided for millennia, then it may be on a path. And when it can begin to answer the question from the perspective of having evolved for such a different reason and in such a different way than human life—an answer we can’t understand—it’s possibly getting there.
youtube AI Moral Status 2025-06-09T22:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzJcqz8qeFxZcnCzht4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzfCKs0ZBp-73xn32R4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzCdZ2uSsgJ9Bp2qJ94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwz6X2tTBenmwU7XRZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugwr86ZKocoK7A3REo14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwspWmfpDvnD0oXakx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwuQlRg8NEIIBcvI5p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz17qux3Gd9ileUcaV4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy0dHVCFzhsDDVWmLV4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwHLGe2a6d0jptMka54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
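The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed into a per-comment lookup — the `parse_codes` helper and the `"unclear"` default for missing dimensions are illustrative assumptions, not part of the original pipeline:

```python
import json

# Excerpt of a raw LLM response in the format shown above (first two records).
raw = '''[
  {"id": "ytc_UgzJcqz8qeFxZcnCzht4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzfCKs0ZBp-73xn32R4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]'''

# The four coding dimensions used in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Index coded dimensions by comment id; default a missing key to 'unclear'."""
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = parse_codes(raw)
print(codes["ytc_UgzfCKs0ZBp-73xn32R4AaABAg"]["reasoning"])  # mixed
```

Keying by the `ytc_…` comment id makes it straightforward to join the LLM's codes back to the original comment text and its metadata.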