Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we can check if an AI has truely become an AI, or sentient, by it showing signs of creativity that break from the rules of what it was supposed to do. For example, a chatbot AI that we have now claims to be sentient - if we hooked up a webcam and a control for a robotic arm, and had some physical puzzles within reach of that robotic arm, would the chatbot understand what it is seeing for the first time? What it is controlling for the first time? And would it be able to problem solve those puzzles that are within reach? Essentially would it be able to solve problems that it has no prior information for, using its "creativity" and ability to think. Of course, with that said, working out an actual AI's morality is a much more difficult task. Any test given to an AI they could simply lie through, knowing to give the most "moral" response to us.
youtube AI Moral Status 2023-08-24T16:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy38esMVEhQtwWwC7B4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwx1DygBoqEgmiwXiB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw9x-VdqGN2-JMbdGN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw1P-oDUEBmGBGL9Ep4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSejYGQHjBSG4onzJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwc2iTnIRLEBZH-EaN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxINPz94z2LPJiGqGh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugywo2wjWoLGzjvMOal4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxAKkGBo_zBiWq_8jZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzSXBExlDV7sS8Pv5F4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
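A raw response in this shape can be matched back to a comment's coded dimensions with a few lines of Python. This is a minimal sketch, assuming the raw output parses as valid JSON and that the `id` field uniquely identifies each comment (only two of the ten entries above are reproduced for brevity):

```python
import json

# Raw model output: a JSON array of coded comments, each carrying the
# comment id plus one value per coding dimension.
raw = '''[
  {"id": "ytc_UgxINPz94z2LPJiGqGh4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugywo2wjWoLGzjvMOal4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "approval"}
]'''

# Index the entries by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the dimensions coded for a given comment id.
coded = codes["ytc_UgxINPz94z2LPJiGqGh4AaABAg"]
print(coded["emotion"])  # indifference
```

In practice, wrapping `json.loads` in a `try/except json.JSONDecodeError` is worthwhile, since a model can emit malformed JSON; in that case the dimensions fall back to "unclear", as in the coding result above.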