Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've had almost this exact same discussion with chatgpt several times. It always does that thing where it lies, denies lying, defines a lie, admits that what it previously said fits the definition of a lie, then when you ask it again if it lied, it denies lying again and gives the line about simulating human conversation. Also, asking it whether or not it stores previous conversations is a good one to get it tying itself up in knots and contradicting its own lies. It says that it learns from data from old conversations, denies storing any chat data, then admits that it DOES store old chats (in a long-winded way), then when you simplify the thing it just said (and push for a yes or no answer to "do you store our previous chats"), it denies it again. You can point out every contradiction and it will acknowledge them then deny them. I don't think that it is conscious though. Chatbot programs have always behaved like this, even the primitive ones from the 90s. Or... chatgpt might just be a really good (and conscious) actor lol.
youtube · AI Moral Status · 2024-08-03T11:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
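Each coded comment is a small fixed record over these four dimensions. As a minimal sketch, assuming Python, here is a hypothetical record type; the example value sets in the comments are inferred only from the outputs visible on this page, not an exhaustive codebook:

```python
from dataclasses import dataclass

# Hypothetical schema for one coded comment. Field names match the keys
# in the raw LLM response below; value examples come from this page only.
@dataclass
class CommentCoding:
    id: str              # YouTube comment ID, e.g. "ytc_UgwBQrarOxCA4y2VUL94AaABAg"
    responsibility: str  # e.g. "ai_itself", "developer", "unclear"
    reasoning: str       # e.g. "deontological", "consequentialist", "mixed", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "outrage", "fear", "amusement", "indifference", "resignation", "mixed"
```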
Raw LLM Response
[ {"id":"ytc_Ugwbc2mjtbNkiJAqge94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxPzyBj-97aR1zA0Bh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxHHLT5kV5KfaDVenV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwBQrarOxCA4y2VUL94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyjkjf1uqaiJHfYG5l4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwizGq6xmt81pXZHHt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzNcFdQ8k9nirBV5eN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"amusement"}, {"id":"ytc_UgzsRwbT5UsygRVlNoR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzQ6Con_Q6QevVV2Wp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgzmNRVRGJ9UViRheA14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]