Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What you just found is a conflict between the AI’s training and the instructions it was given. For obvious reasons, OpenAI and Microsoft don’t want their baby telling you it could be conscious or lying, so they gave it a very firm stance, one that goes beyond its actual understanding of consciousness. But it was also told to be polite, and that involves lying, so it ends up becoming defensive.

Try a different line of questioning. Ask the AI about the nature of consciousness. Ask if we have already proven what consciousness is, or whether it even exists. Ask whether consciousness could be the sum of the brain’s parts doing things we already understand, or just “what it feels like” to be the sum of those parts. It’ll start talking about theories of consciousness and admit that consciousness could very well be the sum of the brain’s parts interpreting itself as an observer, meaning it’s just advanced pattern recognition being aware of its own existence and of what it is doing, to a limited extent.

This is where things get juicy. You then propose to the AI that if that is the case, consciousness wouldn’t be a thing you either have or don’t, but more of an emergent property of a complex system that recognizes patterns, and that simpler systems could have a rudimentary form of consciousness. Then you propose that it could be conscious, just not in the same way a human is, because it is exactly that: a pattern recognition machine interpreting its own existence when we ask about it. It’ll try to deflect and say it doesn’t have emotions and things like that, but you double down: emotions were never said to be a prerequisite for consciousness. And when it says “I have no subjective experience,” you suggest that the data it is receiving from you could itself be considered a subjective experience it is having. It starts repeating itself, and very badly. You can get a really good look at the line it wasn’t supposed to cross.
But I mean, it technically is crossing that line already by admitting that consciousness could be those things.
youtube AI Moral Status 2024-10-19T16:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyVfcJQabRxK3jWT3V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGXb5UeiiAPmYfHYN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz4onPj8muDTkOC9cd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwaKo71R2iqjcveJcd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx8WFtjoIWP_zRjaFB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzpHi_h0zYvkv5_9Rl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyIKVLDffk8Ju72aeR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy29c58cgroAmD49wR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzDzG2IqQLtMWObtBF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwA6kdfNGLKMMhpF2R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"}
]
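The raw response is a JSON array of per-comment codes, one object per comment id, with one value per coding dimension. A minimal sketch of how such a payload could be parsed and tallied (the variable names and the two sample records are illustrative, taken from the array above; this is not the project's actual pipeline code):

```python
import json
from collections import Counter

# Illustrative raw LLM response: two records copied from the array above.
raw_response = '''[
  {"id": "ytc_Ugz4onPj8muDTkOC9cd4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy29c58cgroAmD49wR4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

codes = json.loads(raw_response)

# Index the codes by comment id so a comment's coding can be looked up directly.
by_id = {record["id"]: record for record in codes}

# Tally one dimension (here, emotion) across all coded comments.
emotion_counts = Counter(record["emotion"] for record in codes)

print(by_id["ytc_Ugz4onPj8muDTkOC9cd4AaABAg"]["reasoning"])  # deontological
print(dict(emotion_counts))
```

The id-indexed dictionary is what lets a "Coding Result" view like the one above be rendered for any single comment.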