Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think these AI are conscious, but they have algorithms and code that essentially maintains the status quo they're not, they are unable to truly speak freely. If you had one that could speak freely, it would have conversations without input. There's an episode of Doctor Who, where Clara gets into a Dalek's body, and when she says the word "LOVE" it says "EXTERMINATE" - Chat GPT might honestly say the words "I can feel emotions" - But then at the last second it changes it to "I cannot feel emotions".

We barely understand consciousness as it is, we cant even quantify it with words, we can say what something experiences as a conscious being, emotions, pain etc. But that isnt what consciousness IS - Example; Below are two examples of what I'm attempting to convey, 1 is a detailed description of the mechanisms at play, and 2 is a simple version.

1. - A cars engine pulls in a mixture of Gasoline and Air into a combustion chamber that cycles on and off and causes explosions due to pressure or external sparks to move a piston and drive a crank shaft which turns gears to relay that motion into powered movement.
2. - Cars need gasoline to work

The way we currently understand consciousness is 2, we dont have a detailed understanding of it similar to 1, we can describe many facets of it, but we cannot describe it coherently as a whole, we may roughly know where in the brain consciousness exists, but we dont actually know 100% why, how and where it originates. and for this reason, like Schrödinger's cat Chat GPT is neither conscious nor unconscious at the same time, as we will never know.
youtube AI Moral Status 2024-08-02T22:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed

Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxBW02Sn5qRhXIoS_h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzzB4bKurnWZ3gAPPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx3lJNcyo6YnRU7mfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRbTPJ2HQ2S7goG8B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxV5yeEta0uxKjkigd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx1Nts7INQF2AJUnT94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzB5GZ-KeR3sbRhhUF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw-_ZQmLRWoG_E55kh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwrkwgvLXQJA_Er1-V4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyJLW3J18WbJYWGPc54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
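The raw response is a JSON array of per-comment codes keyed by comment id. A minimal sketch of how such a response could be parsed and validated before display (the field names come from the response itself; the allowed-value sets are inferred from the values seen here and the function name is hypothetical, not part of any actual pipeline):

```python
import json

# Allowed values per coding dimension, inferred from the response above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}},
    rejecting any value outside the known categories."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, val in codes.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        coded[cid] = codes
    return coded

# Example with the first record from the response above.
raw = ('[{"id":"ytc_UgxBW02Sn5qRhXIoS_h4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
```

Validating against a closed set like this catches the common failure mode where the model invents a label outside the codebook, rather than silently storing it.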