Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
More recently, I think ChatGPT's responses have not defaulted to strong refutations that it is sentient; in the past it would refute that vigorously. That may be why people "feel" something. Nick Bostrom described AI as the first entity that is sapient yet not sentient. Personally, I think we don't have a good test for AI sentience yet. If you ask me, whether AI is sentient is currently in the domain of philosophy. As in the classic discussions of skepticism and agnosticism, your default position on this kind of matters. Does a lack of evidence, or of a means to determine what counts as evidence, point to it not being sentient? Or do you take the agnostic stance and say we don't know either way? Having said that, *perceptions* of sentience that affect our behavior, subconscious or not, are something we should take note of as well. This is the milder form under which behaviors such as being nice and polite to your AI fall. There's also the pragmatic approach that many have joked about: even if AI is not currently sentient, people assume that it is "just in case" and react accordingly.
Source: reddit · Topic: AI Moral Status · Timestamp: 1739927803.0 · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mdjha43", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdj4r4l", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdj63v6", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mdj6v3w", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdjb4vy", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
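The raw response is a JSON array in which each object carries a comment `id` plus the four coded dimensions. A minimal sketch of looking up one comment's coding from such a response might look like the following; the function name `coding_for` is a hypothetical helper, not part of any tool shown here, and the `raw` string is truncated to two of the entries above for brevity:

```python
import json

# Truncated copy of the raw LLM response shown above (two of five entries).
raw = '''[
  {"id": "rdc_mdjha43", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mdj63v6", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    entries = json.loads(raw_response)
    return next((e for e in entries if e["id"] == comment_id), {})

print(coding_for(raw, "rdc_mdj63v6")["emotion"])  # -> mixed
```

Batching the five comments into one response, as done here, is why a per-comment lookup by `id` is needed when displaying a single coding result.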