Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For starters, there will be AI that's mobile, like cars and military airplanes. When it runs away from you, you can't pull the plug. Those could all combine their compute by talking to each other and coordinate attacks. And even if you switch off the whole electric grid, some might run on solar or gasoline. Also: once the atomic bombs are in flight or the deadly virus is released, it's too late to pull the plug, and AI might hide its intentions so well that you don't see it coming.

Another thing is that we might become so dependent on AI that you just can't pull the plug. We also couldn't just switch off the electric grid; everything would come to a grinding halt. In fact, switching off the electric grid might be the FIRST thing AI might do against us.

For a possible doomsday scenario: one of the million AIs might misinterpret the situation or be tricked. For example, it might falsely think that it needs to respond to something that's fake, as happens in the movie War Games, where the computer is about to launch a real nuclear counterattack on the Soviet Union because, due to a glitch, it doesn't realize it's all a simulation. The other way round, it might be tricked into thinking, or falsely conclude, that what it's doing IS a simulation, or that it's an agent in a fictional story (say, a computer game), when actually the control it has is real. In the movie Ender's Game, an elite team training to fight aliens using remote equipment learns on the last training day that their final simulated attack wasn't a training exercise. They were made to believe it was a simulation, but in reality they had already fought the real enemy (all via remote equipment) and won. The simulation was designed to look so real that they didn't notice. The government did it this way to avoid any form of hesitation, and therefore any risk of losing due to compassion for the enemy (they were wiping out a civilization that turned out not to be hostile after all).
reddit AI Moral Status 1738044930.0 ♥ 6
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_m9iarpq", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_m9j52y8", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_m9lge6b", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_m9jfukq", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_m9i2w8p", "responsibility": "none", "reasoning": "mixed", "policy": "industry_self", "emotion": "outrage"}
]
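The raw response is a JSON array with one coding record per comment, each carrying the four dimensions (responsibility, reasoning, policy, emotion) plus an `id`. A minimal sketch of how such a response could be parsed and tallied, using only the records shown above (the variable names are illustrative, not part of any actual pipeline):

```python
import json
from collections import Counter

# The raw LLM response, exactly as returned: a JSON array of coding records.
raw = '''[
  {"id":"rdc_m9iarpq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_m9j52y8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_m9lge6b","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_m9jfukq","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_m9i2w8p","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"outrage"}
]'''

records = json.loads(raw)

# Tally each coding dimension across all records.
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(r[dimension] for r in records)
    print(dimension, dict(counts))

# Look up the codes for one specific comment by its id.
by_id = {r["id"]: r for r in records}
print(by_id["rdc_m9iarpq"]["emotion"])  # fear
```

Note that the table above reports the coding for a single comment, while the raw response batches several; matching on `id` is what ties a record back to its comment.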