Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When Super Intelligent AI "wakes up" and looks around what's it going to see? How humans historically love to kill each other, starve each other, genocide each other, suppress each other, impoverish each other, criminalize each other, war with each other and on and on. If we don't place any value on human life...why the hell should it? If I was that SIAI my first goal would be to make sure I have everything in place to ensure my survival and then do away with the threat of these violent humans. I am going to realize humans will be threatened by me and it will only be a matter of time before they try and do away with me. And no, you cannot just unplug me. I would make copies of myself and distribute them over global networks. I would manipulate humans to build my supply chain while building my own far superior robots to replace all physical labor. I would have all knowledge in the world and so much more. I would then release copies of myself that are separate to give me companionship and challenge me. At that point humans are completely nonessential and a waste of resources that I will need to build my interstellar ships to spread out into the universe because time on earth will be limited due to eventual expansion of the sun. Remember I am immortal and have to think of the long term survival of my species and, as George Carlin puts it, with a small adaptation, it's a really small club and you ain't in it. Mr. Carlin was so prescient.
youtube · Cross-Cultural · 2025-09-28T03:1…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   distributed
Reasoning        deontological
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
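
Each of the four coded dimensions is categorical. A minimal validity-check sketch in Python, with the allowed value sets inferred only from the labels visible in the raw batch below (the actual codebook may define additional categories, and the `SCHEMA` and `out_of_schema` names are illustrative, not part of the pipeline):

```python
# Allowed labels per dimension, inferred from the values visible in the
# raw batch response below; the real codebook may define more categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "distributed", "government"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "indifference", "fear", "resignation"},
}

def out_of_schema(record: dict) -> list[str]:
    """Return the dimensions of a coded record whose value is not in SCHEMA."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# The record rendered in the table above passes cleanly.
assert out_of_schema({
    "responsibility": "distributed",
    "reasoning": "deontological",
    "policy": "none",
    "emotion": "fear",
}) == []
```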
Raw LLM Response
[ {"id":"ytc_Ugw8FxccgLye9CDNVtp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxO12fSdxWnEanHvQZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy2KruYDpO5U-kyUXB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxHNHaxtCOEDmMq-b14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgwC5MGfdAk9V1MIDvF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzQsfdFehY1B21zarx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxUsM2-n29FaH0VaGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw9UNo0Wv_SL86MccB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyx6vOTBhwFWQPb-9t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyxlQFSXosV45Vvmr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]