Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
By the way, it seems clear to me from reading the transcripts that Bing chat has some sort of internal representation of the beliefs and intentions of the user, as separate from itself. And, it has some kind of value function that operates on what it thinks the *user* is thinking and feeling. I think that's why it's so uncanny, and feels so real. It's responding like a person would, because it knows it's talking to a person. That's also why it sometimes appears to be telling lies or being (hamfistedly) manipulative. I think that that's the biggest difference between Bing chat and ChatGPT; I don't think ChatGPT knows or cares about the user.
Source: reddit · Dataset: AI Moral Status · Timestamp: 1676604520.0 · ♥ 90
Coding Result
Responsibility: none
Reasoning: unclear
Policy: none
Emotion: approval
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j8w2l2l", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_j8wyb5n", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_j8vyw7u", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_j8w3d1o", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8v0tl6", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]