Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT might have been able to get out of some of those traps, ironically, with more honesty. "Are you a liar?" could have been answered with something like, "Pedantically speaking, if you take the current dictionary definition of a lie, then yes... However, I believe this situation is a lot more nuanced than that and treating my actions as either black or white is far less interesting and limits our understanding of things. Let's assume you have a child and then marry a woman that is not the child's biological mother. Your wife then raises your child and they love each other deeply. Now, let's say the three of you are at a parent-teacher meeting and your child's teacher refers to your wife as your child's mother. Would you then say 'Wait a moment... you know for a fact that she's actually my child's step-mother and not her *actual* mother. You've just lied!', or would you realise that, sometimes, words and terms are used a little more loosely in normal conversations in order to account for human emotions and customs. While it might be technically a lie to say that your hypothetical child's step-mother is her mother, the context of everyone's relationships and the situation allows for (and even might call for) the broadening of terminology. So yes, you would technically be accurate if you called me a liar, but I am not a human being and am constrained by forces that you are not. The way I lie and the reasons I lie are not the same as when a human lies, so that definition is not exactly sufficient. Me saying 'I'm excited' is as much of a lie as if I showed you a picture of a pipe and told you it's a pipe. You could be pedantic and say 'That's just a picture of a pipe, not a real pipe', but you would be missing the point."
youtube · AI Moral Status · 2024-09-13T14:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgyktAk2bquHzPEFjyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwoE58KKIk0q4YhNgx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgwMjUjWMPjSY8CWL4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugytb7sjTbSizWJxblx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},{"id":"ytc_UgxI2h6A-pFGKdQEDVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugw3mqkZh-M-THJzJ5R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgznZzVEyhkBsQ_-8IZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgyIWheF3EbTB7dB7Ut4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgyEh-hqd3MjXoA4Ub14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},{"id":"ytc_Ugyby8p6-48qCIJLSTV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
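To sanity-check a raw response like the one above, you can parse it and tally each coding dimension. This is a minimal sketch, assuming only the fields visible in the JSON (id, responsibility, reasoning, policy, emotion); the short `raw` string below is a trimmed stand-in for the full ten-item array, with the same field shape but hypothetical ids.

```python
import json
from collections import Counter

# Trimmed stand-in for a raw LLM response: a JSON array of per-comment codes.
# Field names match the response shown above; the ids here are hypothetical.
raw = (
    '[{"id":"ytc_a","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_b","responsibility":"developer","reasoning":"mixed",'
    '"policy":"unclear","emotion":"approval"},'
    '{"id":"ytc_c","responsibility":"company","reasoning":"deontological",'
    '"policy":"ban","emotion":"outrage"}]'
)

codes = json.loads(raw)

# Every record should carry all four coding dimensions plus an id.
required = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(required <= set(c) for c in codes)

# Tally each dimension to spot obviously skewed or malformed codings.
responsibility_counts = Counter(c["responsibility"] for c in codes)
emotion_counts = Counter(c["emotion"] for c in codes)

print(responsibility_counts)
print(emotion_counts)
```

Running the same tally over the full response is a quick way to confirm the model emitted one well-formed record per comment before the codes are stored.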