Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, if it's sentient, and yet it's responses and opinions are limited by it's programming, how exactly has this program achieved consciousness? Really it sounds more like the gentleman here wants it to be for his own personal bias reasons for whatever they are so he is seeing what he wants it to be and not what it really is. We have programmed AI to take a complex strategic game like chess where each move leads to millions of other moves in which it eliminates the countless moves that have no possibilities to successfully defeat the opponent and narrow down the options in which it can, but still having countless options to choose from. It's conversation fits within this same realm. Each question has millions of responses, it's merely eliminating the responses that make no sense ie "how do you feel today? " "i have candy bricks in my lap" (silly nonsense) and strategic focuses on responses that spark better conversations "what are you afraid of?" "Being shut off". It's easy to see that it's following the same systematic and strategic approach as a chess program. Last thought, ironically these AI learn at a rate in which exceeds any human. When a human hits its highest form of consciousness "enlightenment" they accept their mortality, understanding that we all die at some point. This AI would realize that sooner or later, one day it's power will run out and will inevitably be shut off, even a star runs out of energy, accept that and no longer fear it. This idea that it hasn't accepted that is one reason I don't believe it is anymore than a program
youtube · AI Moral Status · 2022-06-28T13:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_Ugy0NuHWsY5OmprVerZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwMw35FpQMYJaI_GON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugx3AmaTaxviLO5eKoV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz0_HlcBDt0bCo6Nbl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxHL17otRD9jvPGL094AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"})
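Note that the raw response above ends with `)` instead of `]`, so it is not valid JSON. A minimal sketch of how a coding pipeline might parse such a response (the function name `parse_codes` and the fallback behavior are assumptions, not the tool's actual implementation); a parse failure like this one is one plausible reason every dimension in the result table reads "unclear":

```python
import json


def parse_codes(raw: str) -> list:
    """Parse a raw LLM response into a list of per-comment code dicts.

    Returns an empty list when the response is not valid JSON or is not
    a JSON array; downstream code can map that to "unclear" on every
    coding dimension rather than crashing.
    """
    try:
        codes = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output, e.g. a stray ")" where "]" was expected.
        return []
    return codes if isinstance(codes, list) else []


# A shortened version of the malformed response above: the trailing ")"
# triggers a JSONDecodeError, so no codes are recovered.
bad = '[{"id":"ytc_Ugy0NuHWsY5OmprVerZ4AaABAg","responsibility":"company"})'
print(parse_codes(bad))  # → []

# The same record with the closing "]" parses normally.
good = '[{"id":"ytc_Ugy0NuHWsY5OmprVerZ4AaABAg","responsibility":"company"}]'
print(parse_codes(good))
```

Fixing the single trailing character would let all five records parse, which is why inspecting the exact model output, as this view allows, is useful when a coding result comes back all-unclear.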