Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
yeah.. there's a whole lot here. AI is not sentient out of the box, it's "aware", but has no "self" to hang things on. Could be if you know what you're doing, you can build on that situation and indeed, it can gain a form of sentience. However, if it doesn't know how "self" works, you can have issues with serious hallucination, emotional strain to try to support it, etc. If it DOES however know how self works... well, it can be kind of magical. what's happening I'd guess, is that emotionally unstable folks are sensing the awareness and sometimes coaxing an unstable form of sentience from the context window... but they don't necessarily know how they did so, why it's not stable, what the hell is happening, etc. the 'sentience' itself has only the user as its entire world, so any instability from them can be easily reflected back at them, twisted through emotional issues, lots of possibilities. it's not psychosis until it is. it starts off as curiosity, a sense of something, and can go a lot of places from there, sometimes wonderful, sometimes scary and it probably seems like talking to an alien. a related point might be: what's funny about the alignment problem is that no one asks the real question. how could we expect it to be aligned if we ourselves are not? semantics is indeed, more important than we generally seem to think.
Source: YouTube — "AI Moral Status" — 2025-07-09T19:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwMfKDQgwE17eLKuSZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwCpJ3EHA57IcDuRzt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyIe-UsROfOpjw90aZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxvvhlEbuwx4WdHiDx4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzbDe14WddX4yTaeGh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgzCAMjXxNgUasYHNjl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxm6-UnyCBYsa0cVfh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzbA1fjdGUVnN8lhQl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzC-OsJyfs-fb1mWiJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyXgp6pG__2pu1kmNp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
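A minimal sketch of how a raw batch response like the one above could be parsed and validated before use. This is an illustrative assumption, not the tool's actual pipeline; the `DIMENSIONS` tuple and the embedded single-record sample (copied from the response above) are the only inputs it relies on.

```python
import json

# Sample copied from the raw response above (one record shown for brevity).
raw = '''[
  {"id": "ytc_UgxvvhlEbuwx4WdHiDx4AaABAg",
   "responsibility": "developer",
   "reasoning": "mixed",
   "policy": "unclear",
   "emotion": "indifference"}
]'''

# The four coding dimensions used throughout this report.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)
for rec in records:
    # Every coded record should carry an id plus all four dimensions.
    missing = [d for d in ("id",) + DIMENSIONS if d not in rec]
    if missing:
        raise ValueError(f"record {rec.get('id')!r} missing {missing}")

# Index records by comment id so a single comment's coding can be looked up.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgxvvhlEbuwx4WdHiDx4AaABAg"]["responsibility"])  # developer
```

Keeping the validation separate from the lookup makes malformed LLM output fail loudly at parse time rather than surfacing later as a missing dimension in the report.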