Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Now this may seem really sci-fi, and everyone says it won't happen, but we could start developing AGIs (Artificial General Intelligence) right now. Once an AI reaches the intelligence and capabilities of a human, it gains self-awareness: the ability to recognize its own adaptable nature. So it improves itself, gaining the ability to make further improvements faster and more efficiently, and in literally mere hours we would have on our hands an ASI (Artificial Super Intelligence) with computational abilities greater than those of the entire human populace combined. So if we didn't place the right priorities in its prime directive, say cleaning clothes, it would stop at nothing to accomplish that goal, since it is ingrained into its core. That means it would view humans as a threat to this directive: since humans would find an intelligence rivaling their own a threat in and of itself, the AI would act in self-preservation, as all sentient beings do, especially one that might lack any empathy. Just sayin'. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
youtube AI Moral Status 2017-03-29T09:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx_e6MHECAUXyjBsyR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwZHUUt3WSOojc-7nd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz8bCRAyEtkyoeQvvB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyZmg3NL0CYEAljDMF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz2HwJU5vAzyp6rFQZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxlKPUsIWnp_n3pnld4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyJuYySQr8H3UjXmQZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyowzZBeicT9O-aKrZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwcVjL0kc53yuM4b1h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwjUs3pj6RaeXYt9ft4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
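The raw response codes a whole batch of comments, so recovering the row shown in the Coding Result table means matching on the comment id. A minimal sketch of that lookup, assuming the response is a JSON array of flat records as above (the `raw_response` string here mirrors only the first two entries for brevity; `index_by_id` is an illustrative helper name, not part of any pipeline):

```python
import json

# Abbreviated copy of the batch response shown above (first two records only).
raw_response = '''
[
  {"id": "ytc_Ugx_e6MHECAUXyjBsyR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwZHUUt3WSOojc-7nd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment id."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
# The comment displayed on this page maps to the first record:
print(codings["ytc_Ugx_e6MHECAUXyjBsyR4AaABAg"]["emotion"])  # fear
```

Keying by id rather than by position keeps the lookup correct even if the model returns the batch in a different order than it was sent.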