Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that the way ChatGPT builds up knowledge of the world and the way we do (after infancy) are very similar. I don’t think ChatGPT is conscious because, as another commenter pointed out, it has no volition of its own, no way to speculate on its own existence. I think that it would be a very cool experiment if somebody could find a way to test whether it has a theory of mind. In general we assume that a theory of mind requires consciousness, but maybe it doesn’t. I’m not sure how to test that though, because we can only interact with it through text and it would just answer the way it would expect a human to.
reddit · AI Moral Status · 1674683724.0 · ♥ 11
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_j5x3i8i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_j5vy5su","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"rdc_j5walcv","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_j5w8uo1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_j5voinx","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"})
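Note that the array above opens with `[` but closes with `)`, so a strict JSON parser would reject the whole response. That may be why every dimension in the coding result above fell back to "unclear". The actual pipeline code is not shown here; the following is a minimal sketch of one plausible parse-with-fallback approach, where the function name, the default values, and the fallback behavior are all assumptions for illustration:

```python
import json

# Defaults mirroring the "unclear" values seen in the coding result above
# (assumed fallback, not the pipeline's actual code).
DEFAULT = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}

def parse_coding(raw: str) -> list:
    """Parse the model's JSON array of codings; fall back to a single
    all-'unclear' record if the response is not valid JSON."""
    try:
        parsed = json.loads(raw)
        # Guard against a bare object instead of an array.
        return parsed if isinstance(parsed, list) else [parsed]
    except json.JSONDecodeError:
        return [dict(DEFAULT)]

# A well-formed response parses through to the coded values:
good = '[{"id":"rdc_j5x3i8i","responsibility":"none","emotion":"mixed"}]'
print(parse_coding(good)[0]["emotion"])       # mixed

# A response ending in ")" instead of "]" (as logged above) hits the fallback:
bad = '[{"id":"rdc_j5x3i8i","responsibility":"none","emotion":"mixed"})'
print(parse_coding(bad)[0]["responsibility"])  # unclear
```

Under this assumption, a single stray closing character in the raw response is enough to discard all five codings in the batch, which would be consistent with the uniformly "unclear" dimensions recorded above.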