Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well ladies and gentlemen, here you have it. The argument that was made when Terminator 1 came out and we were all told that such a thing could only happen in Hollywood. World governments and the military aren't going to stop it, hell they even strap machine guns and grenade launchers to bomb disposal robots now rationalizing that still, it saves lives...That is until an AI goes rogue, rewrites it's own code so that it cant be shut down by humans and decides to go full blown vader on everything with a pulse! Too far fetched some would say??? Not in the least. Facebook itself had a runin with AI's going rogue not too long ago. Granted, it was a chat bot but it rewrote it's code in a language of the AI's choosing to confound technicians from shutting it down. In the end, Fb had to unplug the damn thing to grind it to a halt. Ai's are programs that actually learn. At what point did that AI become self aware ENOUGH to want to stop the tech's from turning it off? Why else would it do that if not it registered somewhere in there that it would rather have control over it's existence rather than a human? In this case, how long will it be before they learn of all the killing and destruction that mankind does and one of these now "self aware" weapons becomes aware of what it is capable of to stop the destruction. All too often, far too many making the fast buck off these highly controversial and extremely dangerous technologies pawning the danger off as the rantings of mad people and scifi lunacy. Where will that ideology get you when the dying starts?
Source: youtube · AI Harm Incident · 2018-09-15T13:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzjg1fPTeWgVco3LF54AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxmxCdvCpGAVig4CkR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyvbRX5Ha9bZQ-KUsR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz2p8KfuZiC1ob7BHl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxgHbCspH6ycawtNQN4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwsKAZhme1MPtEkcqB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFx3TfPDhdBrNCMmR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyy7TiD3pWtR6suJSp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw21GG40Vvgk-ZfYs94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx5ouQ42cizxVLlMNx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
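The raw response above is a JSON array of per-comment codes, one object per comment ID with the four coding dimensions. A minimal sketch of how such a batch response could be parsed and validated before storing, assuming Python; the function name `parse_codes` and the `EXPECTED_KEYS` check are hypothetical, not part of the tool shown here:

```python
import json

# Abbreviated raw LLM response in the array-of-objects shape shown above
# (two of the ten records, for illustration).
raw = """[
  {"id": "ytc_Ugzjg1fPTeWgVco3LF54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxgHbCspH6ycawtNQN4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Every record must carry the comment ID plus the four coding dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_response: str) -> dict:
    """Map each comment ID to its dimension codes, rejecting malformed records."""
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS - {"id"}}
    return coded

codes = parse_codes(raw)
print(codes["ytc_UgxgHbCspH6ycawtNQN4AaABAg"]["policy"])  # regulate
```

Validating the key set before indexing by `id` means a truncated or malformed model response fails loudly instead of silently producing partial codings.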