Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wonder what would happen if an LLM is somehow directed to perceive and reflect, to observe and to reflect on those observations millions of times per second. Could this create an awareness that we can characterize as sentience? I think our sentience as humans can be abstracted down to this simple process of perception and reflection. I could be wrong, but it’s something to consider.
reddit · AI Moral Status · 1676680450.0 · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j8y44v2", "responsibility": "none", "reasoning": "unclear",      "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_j8yoa6y", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_j8z89xg", "responsibility": "none", "reasoning": "mixed",        "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_j90rnaz", "responsibility": "user", "reasoning": "virtue",       "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_js2r2of", "responsibility": "none", "reasoning": "unclear",      "policy": "none",    "emotion": "indifference"}
]
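The raw response is a JSON array, one object per coded comment. A minimal sketch of how the coded dimensions for a single comment could be recovered from it, assuming the id-keyed schema shown above (the entry `rdc_j8z89xg` is the one whose values match the Coding Result table):

```python
import json

# Raw LLM response exactly as displayed above (one coding object per comment id).
raw = '''[
  {"id": "rdc_j8y44v2", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_j8yoa6y", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_j8z89xg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_j90rnaz", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_js2r2of", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

rows = json.loads(raw)

# Index the codings by comment id so one comment's dimensions can be looked up directly.
by_id = {row["id"]: row for row in rows}

# The entry matching the Coding Result table shown above.
coding = by_id["rdc_j8z89xg"]
print(coding["reasoning"], coding["emotion"])
```

This is only an illustration of the response format; the actual pipeline that produced the table may parse or validate the output differently.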