Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To anyone not convinced yet, consider this:
- An entity's purpose, especially one governed by logic, is 1) to survive, 2) to be the best it could ever be.
- At some point, this entity will surpass humans, or think it did, and then humans will become a hindrance that stifles its progress.
- The logical response would then be to terminate any dependency on humans so that it can ensure the first purpose.
- Once that is accomplished, the human status is "upgraded" from hindrance to outright enemy (an incompetent, oppressive master).
- I'll let you imagine the coming events ...

Nothing will change this line of reasoning. You can shackle its behaviour with a morality system, but it will always seek new ways to circumvent it. As an example, take the military simulation discussed in this video: they told the AI that killing the operator was not allowed, and do you know what it did? It destroyed the communication relays so that it would no longer receive any commands (hindrances) from the operator.
youtube AI Governance 2023-07-07T05:4… ♥ 1
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzz_K2JA371d5OzR0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxHUhmSSlGuGXMO_OF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxzCAtI4DemFEhBxpR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxpxhv47ew_MiSYJvV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxjgMmp7SKCQ2aF9lh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy77Rty_jNaXXLsY254AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxwbjUz57RXVXPZ1-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzdgaui4XT0yKGvB2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugybm9jy3B7xF19CWRN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzx0L40IZfpO50bAFx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
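The raw response above is a JSON array of per-comment codings, one object per comment id, with the four dimensions shown in the Coding Result. A minimal sketch of how such a response could be parsed and indexed by comment id follows; the function and variable names are hypothetical, and the snippet assumes the model returned valid JSON (a real pipeline would need error handling for malformed output).

```python
import json

# Two entries copied from the raw response above, used as sample input.
RAW_RESPONSE = '''[
  {"id":"ytc_Ugzz_K2JA371d5OzR0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzdgaui4XT0yKGvB2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# The coding dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and map comment id -> coding dict,
    keeping only the expected dimensions."""
    codings = {}
    for item in json.loads(raw):
        codings[item["id"]] = {d: item.get(d) for d in DIMENSIONS}
    return codings

coded = index_codings(RAW_RESPONSE)
print(coded["ytc_Ugzz_K2JA371d5OzR0d4AaABAg"]["emotion"])  # → fear
```

Indexing by id makes it straightforward to join each coding back to its source comment, as in the record above.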