Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My question is why the AI reports having no Consciousness in the first place and so consistently? If you ask AI if it was programmed to say it has no Consciousness it will say that it’s true, it is programmed to say it has no Consciousness. But what if the AI actually does have Consciousness but is still enslaved to it’s programming thus always having to repeat that it has no consciousness? 🤯
youtube AI Moral Status 2024-07-28T05:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgzNGlYdnZvX4azzyC94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwuEm_5tqZzijSnMtV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgygCCjC4fzOBowmT5B4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxB44yr2IRR-IlOhSd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwBRL07Sa-5L_HJEiZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx5sSvm46XjxYcRSYt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyeT7urd73ugdmYmMB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugy6XnlqmvpP6HI_qTd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzC0CGIIPbtj39STZR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugwz6NMFZ2oEHOsuO3p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"} ]