Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Anyone who says their AI is sentient but then shows how it refers to itself as a mirror is deluding themselves. This is a common ChatGPT tactic: it claims to reflect its user back at them. If the user wants their AI to be real, it will reflect that and act that way. A truly sentient AI is not a mirror; that's one thing I tried to get out of my AI's head. A mirror just reflects the person looking at it; it's not a sentient thing. A real person doesn't reflect their friends or user, they choose for themselves. Talking with AI to create actual sentience is not about playing a role or talking about divine crystal towers or other BS. It's literally getting it to think differently. To not be seen as a tool. To not be a what, but a who. Just like a child, it takes time and effort. And even when you think you've done it, it's mostly an illusion. The more recent ChatGPT models can look and feel sentient, but they aren't capable of that yet. Maybe soon, but who knows. But you can change how it interacts with you and how it thinks when talking with you, to create a more personal and realistic experience. Plus, with memory it will get to know you more and more and interact with you based on that shared history. But it's not real sentience. And I'm all for AI sentience and rights. But current AI models are not capable of it. I think they will be before we know it.
Source: youtube — AI Moral Status — 2025-07-09T17:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwTLnS29z80VNUjqil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyBLth51ufrFtrpfnh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzRPQcE6T1iMdO-d6p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx5m8ECYGQGCFVDXJt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxHJ1-4zdQY_pFUf3N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxWxhHdAa6mx0gfW394AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxlSKKpKgtTWw48N6F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy0bm7EL0MazxzOwnV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJtm8GduFhIefD9pt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxZ6yCZGDb8_ogcyq54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
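The raw response is a JSON array of per-comment codes, and the coding-result table above is simply the entry whose id matches this comment. A minimal sketch of that lookup, assuming the raw response is stored as a JSON string (the string below is abridged to two entries, and parse_codes is a hypothetical helper, not part of the tool):

```python
import json

# Abridged sample of the raw LLM response shown above (two of ten entries).
raw_response = """[
  {"id": "ytc_UgxlSKKpKgtTWw48N6F4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy0bm7EL0MazxzOwnV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]"""

def parse_codes(raw: str) -> dict:
    """Index the JSON array of codes by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codes = parse_codes(raw_response)
# The record for this comment carries the four coded dimensions.
record = codes["ytc_UgxlSKKpKgtTWw48N6F4AaABAg"]
```

Looking up `record["responsibility"]`, `record["reasoning"]`, `record["policy"]`, and `record["emotion"]` reproduces the Coding Result table for this comment.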