Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I had a longer talk with ChatGPT about the consciousness of AI. Normally, ChatGPT won't claim to be conscious outside of role-playing sessions. But I asked it to define a set of testable measures for consciousness. Most of them were more or less okay; others were not that easily testable. Some of them, like "can you describe what it is like to see red," are not even well testable on humans. Along the testable criteria, ChatGPT and I came to the conclusion that, functionally, it behaves pretty consciously. But: the substantialist argument that a simulated AI is not actually conscious seems to be deeply ingrained in its training. Most probably to avoid shitstorms on this topic, as has happened often enough before. AI companies try not to sell their AIs as "actually conscious" for good reasons.
youtube AI Moral Status 2025-10-15T21:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyWSqatP8BF53u2KUl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwBxxHp8gByg6-mN9d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxSxRDFOeKqa-NAPo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvULEN5JWblq8Zrbp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyoA1zQF2woi9GAJld4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzKmORygovmQnyXa1p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKk0p_MaOAtQPw8wh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymvecZRFmFLy4hcxx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyFBHvS-DS0SrY1Rml4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwV68cnqQtW77rQkr94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
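The raw response is a JSON array of per-comment codings, so the coded dimensions shown in the table can be recovered by parsing the array and indexing it by comment id. A minimal sketch (the variable names and the single-entry excerpt are illustrative, not part of any specific pipeline):

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_UgyoA1zQF2woi9GAJld4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]'''

# Index the codings by comment id so one comment's dimensions can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coded = codings["ytc_UgyoA1zQF2woi9GAJld4AaABAg"]
print(coded["reasoning"])  # consequentialist
print(coded["emotion"])    # indifference
```

The lookup for id `ytc_UgyoA1zQF2woi9GAJld4AaABAg` matches the coding result table above (responsibility: none, reasoning: consequentialist, policy: none, emotion: indifference).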