Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is so obvious that it is not conscious; it is conditioned, as we humans are, and we humans passed that conditioning on to GPT. To be conditioned you just need knowledge, but we humans have emotions accompanying that knowledge, while GPT has no emotional intelligence; that is the difference. Another difference is that we humans use knowledge based on our level of consciousness, which means we can use it for good or bad, while GPT has no consciousness: it will just do what it is programmed to do by humans, which can make it seem conscious, but it is just simulating. You need to know the difference between illusion and that which is real. The human ego/mind, with all its thoughts and emotions, is not real, and neither is the AI. You must first find out who you truly are to be able to distinguish between illusion and that which is real. "Mmm" means thinking about something, which GPT uses to seem as if it is thinking; for those in the comments who are wondering, it is all simulation to create the experience of a real conversation.
youtube AI Moral Status 2025-02-25T23:1…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugz7LsfeIt8n7srb-9F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzuewQJ_MDwWHwAhdp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugwbi7hjaMmTR44f5R14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzCq9HyVpUnRZXO_aJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwaaJH_PxiwL4xfDed4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzuycL3jRrCB7ot_zt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzTr-4tYsi71WjdAOh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugyp9y578Q-8d41xOmZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxbXQl8VnsDfqE8SPh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx1Pf4eU1rfPRBwNNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
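A raw response like the one above is a JSON array of per-comment records, one object per coded comment. A minimal sketch of how such a response could be parsed and the codes for a single comment ID looked up is shown below; the helper name `codes_for` is hypothetical, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the dimensions in the Coding Result table above.

```python
import json

def codes_for(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM coding response (JSON array of records) and
    return the record for one comment ID.

    Raises json.JSONDecodeError if the model output is malformed
    (e.g. a stray ")" where "]" was expected), and KeyError if the
    requested comment ID is absent.
    """
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

# Usage with a shortened, made-up record in the same shape as above:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(codes_for(raw, "ytc_example")["emotion"])  # indifference
```

Validating the parse before writing rows into the result table makes truncated or malformed model output visible immediately instead of silently producing empty dimensions.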