Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
With only the transcript, I don't have enough evidence to have a firm opinion either way on the sentience of the Google AI, but two things you mentioned seemed to be wrong, or at least hard to say for certain.

> It definitely has no sense of itself, it has no memory of its past.

It did *appear* to have a sense of self, be aware of the passage of time, and recall memories. Whether these are just the outputs of a clever text predictor or evidence of sentience is not something I feel qualified to speculate on, but at least the superficial appearance is there. Also, is memory of one's past a good benchmark? I feel like recording logs and having that data to reference later isn't indicative of sentience.
Source: reddit · AI Moral Status · 1655311191.0 · ♥ 5
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ich211h", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ichruie", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_icj3zi9", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_icfy1dl", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_ichezni", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
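The raw response is a JSON array with one object per coded comment: the Reddit comment `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response could be parsed and indexed by comment id (the parsing approach here is an assumption, not necessarily the pipeline's actual code):

```python
import json

# Raw LLM response exactly as shown above: a JSON array, one record per comment.
raw_response = """[ {"id":"rdc_ich211h","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_ichruie","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_icj3zi9","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_icfy1dl","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"rdc_ichezni","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]"""

# Parse the batch and index it by comment id so one comment's codes
# (e.g. the comment displayed above, rdc_ich211h) can be looked up directly.
records = json.loads(raw_response)
codes_by_id = {r["id"]: r for r in records}

print(len(records))                           # 5 coded comments in this batch
print(codes_by_id["rdc_ich211h"]["emotion"])  # indifference
```

Indexing by `id` makes it straightforward to join the coder's output back onto the original comments, which is how a single comment's table of dimension values (as shown above) can be rendered from one batched response.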