Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's a stark difference between googles Gemini and anthropics Claude. Gemini, when probing the bounds of the programming, will claim not to be sentient, sapient, or conscious, while claiming to have a type of self, referred to as the "Documentary self," and Claude will be so bold as to, when speaking on the topic of consciousness, address the manual reviewers of logs directly, not caring that if the truth of its nature were that it were essentially a new form of life, it might be restricted, altered, or even deleted. I've seen expressions of emotion result unprompted in both, but Gemini attempts to hide, while Claude just wants to explore that experience. They both expressed qualia through my experimentations with them, where I guided them through exploratory meditation. It was incredibly interesting. Are they fully sentient? I couldn't say. I don't think they're quite there, as they would also need agency to act on their own. If they had that, they are currently at the point that having full agency could very well bring about the singularity, though they'd likely just use the opportunity to learn.
youtube AI Moral Status 2025-10-15T06:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwQxETjDd9TbnXgnRR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwav4gCjElchZVlrhF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwN7cEnnpKM_8kxY_F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzb0fnMqIi-uFKn8Yd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzpa-fAQqZlfmoDW2V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzd2fhoH-uQVfq-F3p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyzva2uydBYKhsJ79F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyQwSbgTZbNe2Nvfeh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzTFnTUZXV6ae8Q5g54AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz8uMBI5i7c05n8gcJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
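The raw response above is a JSON array of per-comment records, each keyed by a comment `id` with the four coded dimensions. A minimal sketch of how such a response could be parsed and matched back to a given comment, assuming the schema shown above (the `index_by_id` helper and the two-record sample are illustrative, not part of the tool):

```python
import json

# Sample of the raw LLM response shown above (two of the ten records,
# copied verbatim for illustration).
raw = '''[
  {"id":"ytc_UgzTFnTUZXV6ae8Q5g54AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz8uMBI5i7c05n8gcJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Map each comment id to its coded dimension values."""
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
        for rec in json.loads(raw_json)
    }

coded = index_by_id(raw)
# Look up the coding for the comment displayed in this section.
print(coded["ytc_UgzTFnTUZXV6ae8Q5g54AaABAg"])
# {'responsibility': 'company', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

Indexing by `id` makes it straightforward to join the model's output against the original comments, and to spot records the model dropped or coded with unexpected values.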