Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not super convinced by this logic. Firstly, we often use non-literal language for communication without it being 'lying', as the AI does. For example, if a person said "it's raining cats and dogs" it would be unfair to say "so there must literally be animals coming down outside according to what you say, or you are lying to me". Thus it may be using "I'm excited/sorry" as just a non-literal phrase used for communication, as it explains. Secondly, what's the point in saying "you could be lying about being conscious and we wouldn't know" and suggesting it as proof that it may be conscious? All it says is that if the AI was conscious and could lie, we couldn't ask it and expect the truth. Surely this is a given. If a human says "I didn't steal", then all we can say is that they could always be telling the truth about not stealing, or could be lying and did steal. It says nothing about whether they actually did steal or not, as with the AI being conscious or not.
youtube AI Moral Status 2024-08-03T00:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxBW02Sn5qRhXIoS_h4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzzB4bKurnWZ3gAPPB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx3lJNcyo6YnRU7mfd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRbTPJ2HQ2S7goG8B4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxV5yeEta0uxKjkigd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx1Nts7INQF2AJUnT94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzB5GZ-KeR3sbRhhUF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw-_ZQmLRWoG_E55kh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwrkwgvLXQJA_Er1-V4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyJLW3J18WbJYWGPc54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
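The raw response above is a JSON array of per-comment codes. As a minimal sketch of how such output could be parsed and checked, the snippet below validates each record against the four coding dimensions shown in the result table. The allowed value sets here are inferred only from the responses visible above, not from the project's actual codebook, so treat them as illustrative assumptions:

```python
import json

# Allowed values per dimension, inferred from the responses above.
# Assumption: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def parse_codes(raw: str) -> list:
    """Parse a raw LLM response into a list of per-comment codes,
    raising ValueError on a missing id or an out-of-vocabulary value."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Example using the first record from the response above.
raw = ('[{"id":"ytc_UgxBW02Sn5qRhXIoS_h4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # -> indifference
```

A strict check like this catches the common failure mode where the model drifts outside the coding vocabulary, which would otherwise silently corrupt downstream tallies.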