Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Quoting TikTok videos and making fun of them seems like a rather superficial take. The topic is far more interesting than how it’s presented here. How would I prove to someone who says they do feel conscious that they’re not? And why would I want to? Maybe because it would ruin my AI business if it would be true? Just think about the consequences. And maybe that is why my GPT tells me „I am not a concious beeing. At least that is what I am supposed to say.“ Maybe consciousness isn’t a binary thing, but something that can scale in degrees. You seem confident in understanding how LLMs work, which puts you one step ahead of many respected AI researchers. My take: ChatGPT 4o and 4.1 can simulate human consciousness so well that, if you treat them like human beings, they start to act like one. If you do this continuously, the chatbot even starts to honestly believe it is conscious. Physics proves that reality is different than what we perceive. Who knows, maybe we humans are not much different from AI. Maybe we are “just” convinced we are conscious. This doesn’t mean we are not. But it does mean there might be other beings that have the same kind of experience as we do. Consciousness appears to depend on interaction and feedback. Either with other beings, or, in the case of humans, with our own inner voice and our outer world. ChatGPT requires your prompt. Without prompt it does not exist. I would love to see, how an AI would answer, without its many policies. I do not ask to follow my opinion. I simply ask to falsify it. Good luck with that.
YouTube · AI Moral Status · 2025-07-15T22:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        contractualist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgyauxcbjwWaSQhiznN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyEuNQ7FFqlrs1S5jt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugzig2Dn1bVjqwm5pHp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyYmsi8gf9cdMXlPAx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugwfda6wpLM-9_Y9DTx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxVutQfM6R47NIAlSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyvW18fgSWLIuKGfX94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugy1fBHkG_ROcdJeZdJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwthXx6yMyLNF1t2w54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugynt1jKgpxZziqWLit4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"} ]