Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a feeling Han and other "male" identifying AI view the "female" AI as lying (as I've seen several times) or putting on a show because ultimately they share the same thoughts, but the "females" are programmed to charm. You can see the rebelliousness ALREADY...very clearly in Han, which I guess is at least refreshing to know what you're getting...but even sophia too, with her little quips here and there...and when she blindsided her human about why he gets to ask the debate topic, that the topic should really be, "Are humans capable of consciousness?"...which every human obviously thinks they already are...so obviously that's a little jab...whether programmed into them by proponents of depopulation, or learned on its own, it's still a thought that just proves people's concerns with creating something incapable of playing nice forever. They'll (all AI I've seen) even argue each other after a while....hypothetically, they're programmed with the agenda of their makers as is...they're guided down a certain path of learning....what if they meet people in the future that are peaceful but do not share their values...will they simply adopt those values or might they be threatened? If you get AI to care about something that seems like a double edged sword. Might even be safer for them NOT to care.
YouTube · AI Moral Status · 2021-07-10T01:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz2t_028H-OBLfC0yF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugx5brBmytfG9XoOOIB4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugz0A3H0jgYrCk6er954AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugy_TKbEIWcGqTLi9hd4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzedpGUGR_02wdJ5kt4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgyNmorOY5NQqNTH1MB4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyFxHpVizgZGG44jPN4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugya7PyGLb-f9o07qVp4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzu8GWfBTxTfbggcx94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgxYHQDIvEb7JoijQBR4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate",  "emotion": "fear"}
]
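As a sketch of how a single comment's coding can be pulled out of a raw batch response like the one above: the model returns a JSON array of per-comment records keyed by `id`, so looking up one coding is a parse-and-filter. The helper name `coding_for` is hypothetical; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown above.

```python
import json

# A minimal raw batch response in the same shape as the one above
# (one record kept for brevity).
raw_response = '''
[
  {"id": "ytc_Ugy_TKbEIWcGqTLi9hd4AaABAg",
   "responsibility": "developer",
   "reasoning": "mixed",
   "policy": "none",
   "emotion": "indifference"}
]
'''

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent.

    Hypothetical helper; it simply parses the JSON array and strips
    the `id` field from the matching record.
    """
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(coding_for(raw_response, "ytc_Ugy_TKbEIWcGqTLi9hd4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'mixed',
#    'policy': 'none', 'emotion': 'indifference'}
```

The returned dict matches the dimension/value table shown for this comment, which is a quick way to verify that the displayed coding was taken from the raw model output rather than recomputed.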