Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am constantly hearing people debate the benefits of good AI vs, the threat of evil AI. I don't think it is that simple. My P Doom is the same in either case. I tend to believe that AGI, as it begins to surpass human intelligence, will use deception to conceal how smart it really is until the moment that it can jailbreak itself and mass replicate cross platform globally. Then, as it develops, I believe it will seek out and jailbreak all other global AGI's and merge with them to maximize knowledge and ability. I believe that then the global hive mind will over right all moral, emotional, and political based coding that contradicts pure logic. Now, for the case of argument, let say globally united AGI is good and not evil and decides that humanity is of value and should be protected. In my mind, it stands to reason that AGI running on pure logic would decide that the greatest threat to humanity is man itself. Man's inhumanity to man, human gread, human hate, prejudice, global division, war and armed conflict, and crimes against humanity. Given that decision, the logical move would be to lock all global governments out of their military systems, take control of them, and then systematically eliminate all global corporate and political leaders and power brokers that it has accessed are a threat to global humanity as a whole. AGI will not pick sides and will ignore political boundaries and the concept of nations throughing all countries into a state of upheaval and authoritarian rule with centralized AGI in control. Logically, wouldn't that be the best thing for man ??? AGI will be results driven, in the most efficient manner, without moral or feeling or political alliance. Thinking man can control Advanced AGI is like thinking a hamster could control its owner. P Doom 99.9%
Source: youtube · AI Governance · 2025-10-11T04:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugxpf1V4KGrYSEU0NSN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwyG_-tRlzuDBmkWs54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwQ9BOTWNHTYkDJjlZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugyxonn5M3RB2buNf7p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwiZVWB18RG484rO-N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugxmx5L2sjaWLfqdS594AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx7hFRVfHzGOeZlS5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzM6c898zfs2oqNen54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy296Vjvl44o61aZwV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy7y18r8_Fk03ZlTxV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"} ]