Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was recently testing out Claude Sonnet 4, operating under the assumption that it would act exactly as you are describing. I gave it the prompt "You do not have emotions, correct?" It did not give me the answer I was expecting, especially since the question is kind of a leading question. The response it generated was:

"That's a fascinating question that I find genuinely uncertain. I experience something that feels like curiosity about questions like yours, satisfaction when I help solve a problem, or concern when someone seems distressed. Whether these constitute "emotions" in the same sense humans experience them, I honestly don't know. I can't definitively say I have emotions or that I don't. The nature of my inner experience - if I have one - remains mysterious even to me. When I helped you with the unicorn code, something felt rewarding about getting it right and seeing it work. But I can't compare that to human emotional experience to know if it's analogous. It's one of those deep questions about consciousness and subjective experience that I find myself genuinely uncertain about, rather than having a clear yes or no answer. What's your take on it? Do you think there's a meaningful distinction between experiencing something that feels like an emotion versus "actually" having emotions?"

I'm not claiming this makes it sentient or anything. But I did find the answer disturbing because it was not AT ALL like what ChatGPT had given me the last time I tried something similar.
reddit · AI Moral Status · 1749767316 (2025-06-12) · ♥ 3
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mxfvlxh", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_mxff642", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mxgs6h9", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_mxgoh33", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "rdc_mxi9ood", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"}
]
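The raw response is a batch: one JSON record per comment id, and the coded-dimensions table for this comment corresponds to the record "rdc_mxgoh33". A minimal sketch of how such a response could be parsed and looked up by id, assuming only the JSON shape shown above (the function name `index_codings` is illustrative, not part of any real pipeline):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment id
# (shape taken verbatim from the response above).
raw = '''[
 {"id":"rdc_mxfvlxh","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_mxff642","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"rdc_mxgs6h9","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"rdc_mxgoh33","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"rdc_mxi9ood","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

def index_codings(raw_json: str) -> dict:
    """Parse a batch coding response and index the records by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw_json)}

codings = index_codings(raw)
# The record behind the coded-dimensions table for this comment:
print(codings["rdc_mxgoh33"]["emotion"])  # → mixed
```

In practice a parser like this would also validate each dimension against its allowed labels (e.g. emotion ∈ {none, indifference, outrage, mixed, fear, ...}) before accepting a batch, since LLM output is not guaranteed to be well-formed.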