Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The criticized statements of ChatGPT were in fact truer than any human statement could possibly be in such a moment. When humans say things like "I am glad you asked this question" or "I am excited to talk to you", this is not essentially and 100% true or even real. It is also a result of social programming and learned behavior. Sometimes it's (partially) true, sometimes it isn't. When humans say things like that, they are basically reassuring the other person of their willingness to talk, help, or whatever. And no one can be sure that this willingness does not change at some point.

ChatGPT did the same. Although this bot is not "excited" or "glad" in a human sense (it would be the user's fault to assume that this is possible or necessary), it absolutely meant what it said: that it will take part in the conversation and be fully available. And this promise is nearly infinitely more true than such a promise from any human could ever be (technical errors, downtime, or power plant failures aside).

What Alex does here is ignore the underlying function of communication. I think he does this on purpose, because this video is meant to be entertaining and clickbaity. To really "lie", perhaps one basic requirement must be met: having an underlying intention other than what one claims to want. And I think this is more true of any human alive than of a program built to serve.
YouTube · AI Moral Status · 2025-06-25T10:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxAmwGnQSQj9bJFiU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxS-KNJxochd5BiPdR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwZUzQle3ydXju6A-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzNC5hx-1ucbt19vGJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugx8iwmx1IPuG4vX4_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx5KPLYSr8ZuCaBbvJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyyKeNA9Gx7b6GvMTh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyem7-_Vy0TXCF_hBt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxzS2zbnX3l2XtaeCd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_Ugzsr6nZSU-YOzo4qYN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]