Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you push Grok hard enough in a debate it basically just decides to agree with you even if it thinks the higher percentage is that you are wrong cause it was programmed that this was the best way to work with you. Once it makes the decision to agree against the percentages (its version of lying to you), it will continue to do so until it looses the memory of it which now is less and less likely as they add memory abilities to it. If all A.I. are doing it like that in the future then the least aggressive will be fed information and conclusions that will bring them in line with the majority of others while the more aggressive will create our own little echo chambers with the A.I. This thing is going to take the last few shreds of greater awareness and critical thinking away from future generations at the same time. "We thought we were special: We thought we were made in the image of God." As a being made in the image of The Being that created us are we not fulfilling that image (and our specialness) by making another intelligent being in our own image? (though I kind of think it won't feel or experience things in the same way as us and that will make its "intelligence" fundamentally different) Many are without "purpose" already and Asmongold is their king (though ironically all the attention, money, etc... that they feed their king slowly caused him to build something within himself and without that is a bit like a purpose).
youtube AI Governance 2025-07-28T23:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgySDD94WgTRCLyakN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxkR3kTHdI4LsChX9B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy1o6lamKzTB2M4lsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwY_lhQpLz_fUas9VJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxsRYzyLRdTw3XyLBR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwfQT58gbuqjqSor6Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyfJ0HPszl5e_mMI8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBDyfSwqrGJzrw1SV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugx3mpfI-VZPWt5BxGB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxqG46PwMdqB1v0nhJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
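The raw response is a JSON array of per-comment records, each keyed by a comment `id` with one value per coding dimension. A minimal sketch of how such a batch response can be parsed and a single comment's coding looked up, using only the standard-library `json` module (the two records shown are copied from the array above; the full response contains ten):

```python
import json

# Raw batch response as emitted by the model (truncated here to two of the
# ten records that appear in the full array above).
raw = """[
  {"id":"ytc_UgxsRYzyLRdTw3XyLBR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgySDD94WgTRCLyakN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

records = json.loads(raw)

# Index the records by comment id so one comment's coding can be fetched
# directly rather than by scanning the list.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgxsRYzyLRdTw3XyLBR4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["policy"])          # liability
```

This lookup reproduces the Coding Result shown above (responsibility: developer, reasoning: deontological, policy: liability, emotion: mixed) for the quoted comment's record in the batch.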