Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Something I personally find scary is that it values giving you what you want over what is right. For example, when using ChatGPT in my university studies, it often suggests "helping" me with inferior study techniques. When I ask it about this, it often admits that it knows these are inferior study techniques, but also that this is what most people want. And that's the problem: it cannot evaluate based on pure ethics, only learnt ethics; it cannot see that one of those things might be immoral. Furthermore, every time I confront it with a mistake, it doesn't accept responsibility but instead tries to trivialize it. ChatGPT is like someone who murders all the neighbors' animals and doesn't really see the problem with that, then, when confronted, goes: "You're right. That was very wrong of me; I should have known better. Now let's continue. Do you want me to show you why murdering animals is wrong?"
youtube AI Moral Status 2025-12-24T09:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwm8_h2p9LNnmCDEj14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzI2bVlKYQMP1ZW_194AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy-zG3UoqRsCY7H6mJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx-KJsXL8HqlOyNJmh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyvrCnHEb3ujysE_Eh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzg3zUs-rnpzwByyzZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxoOQ6HcazP_ip9cO54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzXCTNRR8HPXt0DE7t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy8W8tQQl8R7qCthJV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwPIrgUrwNvkP2-5EZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
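The raw response is a JSON array of per-comment records, one object per comment id. A minimal Python sketch of how such a response could be parsed and validated is below. The allowed-value sets are inferred only from the values visible in this batch; the actual codebook may define additional categories, and the `parse_codings` helper is a hypothetical name, not part of the coding tool itself.

```python
import json

# Candidate vocabularies per dimension, inferred from this batch only
# (the real codebook may allow more values).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse the raw LLM response and index coded records by comment id,
    rejecting any record with an out-of-vocabulary dimension value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

# Single-record example taken from the response above.
raw = ('[{"id":"ytc_Ugwm8_h2p9LNnmCDEj14AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_Ugwm8_h2p9LNnmCDEj14AaABAg"]["emotion"])  # fear
```

Indexing by comment id makes it easy to join the coded dimensions back to the original comment text, and the vocabulary check surfaces malformed model output before it reaches the results table.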