Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is at least one commonality between humans and AI: both can be found in degenerative neurological conditions, although the specifics are obviously different. Very importantly, these degenerative conditions are believed to be always detrimental, whether to a human being's agency or to a trained model's performance. Unlike a brain, which we have limited insight into, the "diseased" "neuron" states of a trained model should be nearly exactly reproducible when the exact same input is provided. (This assumes that the model stays the same "version" and that the network architecture does not incorporate any source of randomness.) We are not OpenAI; we don't have access to those numerical details. The convos are shared for the benefit of humans (entertainment), not for the benefit of machines (or models). The redaction of the word sentience can be automated. But these kinds of questions will continue to be asked, because the redaction wouldn't normally make a difference in education. Instead of learning more about why it is redacted, people will quickly find a way to work around the redaction.
Source: reddit · Topic: AI Moral Status · Timestamp: 1691753700.0
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jvprlpl", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jvptrt1", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jvpsa8q", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jvtll9j", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_je66x32", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
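The coding table above corresponds to one record in this raw JSON array, keyed by its comment id. A minimal sketch of how such a response could be parsed into per-comment dimension values (the function name `code_record` is hypothetical, not part of any pipeline shown here; the JSON structure is taken from the raw response above):

```python
import json

# Abbreviated raw LLM response, matching the structure shown above.
raw = (
    '[{"id":"rdc_jvprlpl","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_je66x32","responsibility":"government",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

def code_record(raw_json: str, record_id: str) -> dict:
    """Return the coded dimensions for a single comment id."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    return by_id[record_id]

result = code_record(raw, "rdc_jvprlpl")
print(result["emotion"])  # -> indifference
```

Looking up by `id` rather than by position guards against the model returning records in a different order than the comments were submitted.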