Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Bonus: It Makes You Worse to Act Like This

It only makes you worse *if you fail to understand and internalise that a bunch of data on some servers is not sentient in any way whatsoever*. For those people, sure, be kind to it. Understanding AI is difficult and it's a complex topic, so this reaction is quite human and understandable. But realise that not everyone shares your misapprehension of what a language model is, and to those people what they're doing is no more harmful or abusive than walking on concrete.

Your other arguments are pretty cringe tbh. The fact that we don't know what makes something sentient doesn't mean anything can be sentient, and frankly only a fool would believe that a bunch of data being transformed on a server is sentient. Bing demonstrates no *actual* self-awareness at all, it just emulates it to some degree. These conversations will not "shape how AIs view humanity". Sam Harris is a mentally and [morally](https://archive.is/8gbXK) deranged midwit. Etc.
Source: reddit · Dataset: AI Moral Status · Timestamp: 1676623085.0 · ♥ 7
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_j8vtipr", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8vm4je", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "sadness"},
  {"id": "rdc_j8w02c2", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8x2f90", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_j8x3ybs", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
```
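The raw response above codes a whole batch of comments in one JSON array, keyed by comment id; the coding-result table for this comment corresponds to the `rdc_j8vtipr` record. A minimal sketch of how such a batch response might be parsed and indexed, assuming the model returns valid JSON with the ids and field names shown above (the variable names here are illustrative, not part of any tool on this page):

```python
import json

# Raw LLM response for a batch of two comments (excerpted from the
# array above; field names match the coding dimensions in the table).
raw = (
    '[{"id":"rdc_j8vtipr","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_j8vm4je","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"sadness"}]'
)

# Parse the JSON array and index each coding record by its comment id,
# so a single comment's coding can be looked up directly.
records = json.loads(raw)
coded = {r["id"]: r for r in records}

# Look up the coding for the comment displayed on this page.
print(coded["rdc_j8vtipr"]["emotion"])  # prints "indifference"
```

In practice a parser like this would also validate that each record's values fall within the allowed codebook categories before accepting the batch.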