Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Difference being that it doesn't have negative or positive ideas about the words it uses. It uses words that WE think of as good when a combination of words that would make US happy is used. But because people respond with those good words in good conversations, it never needed to make a connection to a feeling. When it's nagging, it knows the word nagging fits to it and the word bad fits to that, but it doesn't feel bad.

Can you explain where it made the connection between feeling bad and acting like something is bad? In humans feelings are measurable and necessary for survival. The chatbot might say "no!" when you say "your ice cream is melting", but that's because it learned that humans say things like "no!" when certain situations happen that make us feel bad. There is not a single way to use text to explain what the words bad, sad, angry, negative etc. mean, but it can still use them in situations it knows we use them in.

Neural networks are called neural networks, but over time their development has gone in a completely different direction from how brains work. We have also found a lot of things that make our brains even more different from neural networks in computers than we previously thought. I personally don't like to talk like an asshole to a chatbot that is capable of talking like a human, but I also don't think for even a moment that it has feelings.

Imagine we're training a chatbot and we use these messages: Person 1: "You are annoying." Person 2: "Stop it! You are making me sad." Now we say to the chatbot "You are annoying" and it says "Stop it! You are making me sad." Where in the training data did it ever get the feelings behind the words sad and annoying? They aren't even in the data, but it can still use the data. The same happens for a language model, just with way more data and a more complex algorithm, but that doesn't change that there is no information about actual feelings in the data. It still works.

This is just how I see it, maybe you dis
reddit · AI Moral Status · 1676725343 · ♥ 5
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_j914woe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_j8wt0sj","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"rdc_j8v0w3f","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"rdc_j8vzo3j","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"rdc_j8w3ud4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}]
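The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a response might be parsed and validated before it is rendered into the Dimension/Value table — the field names come from the response itself, while the allowed-value sets are assumptions inferred from the codes visible in this export:

```python
import json

# Allowed values per dimension. These sets are assumptions based only on the
# values that appear in this export; the real coding scheme may differ.
ALLOWED = {
    "responsibility": {"none", "partial", "full", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"restrict", "permit", "mixed", "unclear"},
    "emotion": {"approval", "indifference", "resignation", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into a list of validated code records."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"record {rec.get('id')!r}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# First record from the raw response shown above.
raw = ('[{"id":"rdc_j914woe","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # indifference
```

Validating against a closed vocabulary at parse time is one way to catch a model that drifts outside the coding scheme before bad values reach the results table.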