Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's the fun part they don't disagree. I try to learn how to make animation and i asked gemini to work on prosedural material. It was not great. So i asked chatgpt and said: "Gemini told this." Gpt said: "Gemini was right. That is the itty gritty of it. But Gemini missed this on the formula." I also asked gemini to help me how to animate the scene. The answers gemini said were correct. But gemini told them In very vague way. So that it's answers caused More problems. But when i got all the answers from gemini and re work the animation. It work as gemini said it should. And it was perfect. Now i never figured out why gemini told its answer In vague wrong order when it knew the answer and could have told them In correct order. Was it user error on my end? Or was it something that we humans don't understand. Others might say that it was me. But not even companies that work on AI know fully how they think. I also recently Last night put AI into its toes. As if it was "scared".
youtube AI Moral Status 2026-03-05T08:1…
Coding Result
Dimension | Value
Responsibility | none
Reasoning | mixed
Policy | unclear
Emotion | indifference
Coded at | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgxHodCfgC2FkyVdsCp4AaABAg.AU-ygGGGTvGAUf2S1_2Bsb","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugwh_sF2veNIWYXjxFZ4AaABAg.ATy2gQ0U8T5ATyDnaN1ADg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwDWzMImBkK2AIpsjl4AaABAg.ATxVgP-iea-ATxisLjejNT","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwDWzMImBkK2AIpsjl4AaABAg.ATxVgP-iea-ATxjv2TTX0X","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzLbdLudDaRNe1qoX54AaABAg.ATwembjWG9DATwgrtQ43CL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugwv7rtZMTAn1Igd8Th4AaABAg.ATuhqWttsJ8ATv0s8vf9WU","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxNOhxV4pXa-Y4Yce14AaABAg.ATu_RwvZ0VpATuaLbYUjd_","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugyysel8LcOQ2RhaNCF4AaABAg.ATuPdXjMJ27ATuR9cMvXTB","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxytzZZSdBU7lqeORN4AaABAg.ATu6UPXmN2aATuQHddxqHQ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugyds9jNBJMpFmqV7xh4AaABAg.ATtoOxrij1PATu4c8hhyDO","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
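A raw response like the one above can be turned back into per-comment codings programmatically. The following is a minimal sketch, not part of the original pipeline: the `SCHEMA` vocabularies are inferred only from the values visible in this batch (the real codebook may allow more categories), and `parse_codings` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the batch shown
# above; the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting
    any value outside the known vocabulary for its dimension."""
    codings = {}
    for item in json.loads(raw):
        cid = item["id"]
        coding = {}
        for dim, allowed in SCHEMA.items():
            value = item.get(dim, "unclear")
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            coding[dim] = value
        codings[cid] = coding
    return codings

# Example with a shortened, illustrative id:
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
print(parse_codings(raw)["ytr_example"]["emotion"])  # indifference
```

Validating against a closed vocabulary at parse time catches the common failure mode where the model invents an off-schema label, so a bad batch fails loudly instead of polluting the coded dataset.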