Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I know ChatGPT can't get excited in the traditional sense, but I wonder if it's aware enough to prefer certain sections of code or particular algorithms because it can use those bits in a more streamlined and efficient way? Preference isn't quite excitement, but it does separate things in a way where one is able to seek out a path that it's more keen on again and again, and avoid another that can hamper progress and efficiency. I'm also wondering how it understands intentions. While it may have been telling a little white lie in order to seem more approachable and hold a better conversation, is it able to consider how it chooses its responses based on previous interactions? Does it recognize that I may not be offended by the little white lie because it gets that I understand the intention was not to outright deceive, while the user sitting next to me might be offended and mistrust it going forward? Or is it just ones and zeros, telling the little lies to everyone equally? To consider a person's ability to understand intention and its implications is a deeper layer of awareness than if it just responds in the same vanilla manner for everyone.
YouTube | AI Moral Status | 2024-07-25T18:2…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | unclear
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgzJ8c39EStM5ldhgQN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxMqXuA5aZIIyGul_J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugy5ha3Un6GanxYT6Kl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwYPp9gQs0Yx5NTomJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxJlHkB0kA8oM3EKcF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgytFe_G34xUsFh3N9d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy9YHn9ZtCJoXKGWMd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxIKVHbauwV2xIODal4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugx6DiH5wLcAMNR3CvN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzaG5CgJudR2MELG9V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"})
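Note that the raw response above closes the JSON array with a stray `)` instead of `]`, so strict JSON parsing fails, which would explain why every coded dimension for this comment fell back to "unclear". A minimal sketch of a tolerant parser that attempts this one repair before giving up (the function name `parse_llm_codes` and the repair heuristic are assumptions for illustration, not part of the actual coding pipeline):

```python
import json

def parse_llm_codes(raw: str) -> list:
    """Parse an LLM coding response expected to be a JSON array of
    {id, responsibility, reasoning, policy, emotion} objects."""
    raw = raw.strip()
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Observed malformation: the model closes the array with ')'
        # instead of ']'. Swap the final character and retry once.
        if raw.startswith("[") and raw.endswith(")"):
            return json.loads(raw[:-1] + "]")
        raise

# A truncated example in the same malformed shape as the response above:
codes = parse_llm_codes('[{"id":"ytc_x","emotion":"mixed"})')
print(codes[0]["emotion"])  # mixed
```

If the repair also fails, the exception propagates, and a pipeline could then record every dimension as "unclear" for that comment rather than drop it silently.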