Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "US executive culture was *come in, fire a bunch of people, and make those who re…" (ytr_UgwiIrjzZ…)
- "The US doesn't own AI. China can use it to their advantage, the way Russia can a…" (ytc_Ugz_T5z01…)
- "If you define artificial general intelligence as equivalent to a person with an …" (ytc_UgzQ1g3tC…)
- "I am against giving rights to robots. Why ? Because robots are our tools, that W…" (ytc_UghHBbOlX…)
- "This is why we should be concerned with AI systems and the existence of idiots i…" (ytc_UgxYMLIrY…)
- "If AI is going rogue... Then it doesn't matter if it's chinese AI or American AI…" (ytc_UgzYKAwz_…)
- "Oh s*** they already have one of these up and running as like a black market or …" (ytc_UgxQJFVcB…)
- "How stupid can people be chatgpt gives answers according to the person who uses …" (ytc_UgxJVeNRe…)
Comment
For starters, there will be AI that’s mobile, like cars and military airplanes. When it runs away from you, you can’t pull the plug. Those could all combine their compute by talking to each other and coordinate attacks. And even if you switch off the whole electric grid, some might run on solar or gasoline.
Also: once the atomic bombs are in flight / the deadly virus is released, it’s too late to pull the plug, and AI might hide its intention so well that you don’t see it coming.
Another thing is that we might become so dependent on AI that we just can’t pull the plug. We also couldn’t just switch off the electric grid; everything would come to a grinding halt. In fact, switching off the electric grid might be the FIRST thing AI might do against us.
For a possible doomsday scenario: one of the million AIs might misinterpret the situation or be tricked. For example, it might falsely think that it needs to respond to something that’s fake, as happens in the movie WarGames, where the computer is about to launch a real nuclear counterattack on the Soviet Union because, due to some glitch, it doesn’t realize it’s all a simulation.
The other way round, it might be tricked, or falsely interpret something, into thinking that what it’s doing IS a simulation, or that it’s an agent in a fictional story (say, a computer game), when actually the control it has is real. In the movie Ender’s Game, an elite team training to fight the aliens using remote equipment learns on the last training day that their final simulated training attack actually wasn’t a training. They were made to believe it was a simulation, but in reality they had already fought the real enemy (all using remote equipment) and won. The simulation was designed to look so real that they didn’t notice. The government did it this way to avoid any form of hesitation, and therefore any risk of losing due to compassion for the enemy (they were wiping out a civilization that, it turned out, wasn’t actually hostile).
reddit · AI Moral Status · 2025-01-28 (Unix 1738044930) · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_m9iarpq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_m9j52y8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_m9lge6b","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_m9jfukq","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_m9i2w8p","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"outrage"}
]
```
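A raw response like the one above can be parsed and sanity-checked before its codes are stored against a comment. A minimal sketch, assuming the allowed category values can be inferred from the examples on this page (the real codebook may define more categories, and the second record below is a hypothetical malformed one for illustration):

```python
import json

# Allowed values per dimension -- ASSUMPTION: inferred from the codes
# visible on this page, not taken from the actual codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "mixed"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # without an ID the record cannot be joined back to its comment
        # every dimension must be present and hold an allowed value
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '''[
  {"id":"rdc_m9iarpq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_badrec1","responsibility":"robot","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''
print(parse_codes(raw))  # only the first record survives: "robot" is not an allowed value
```

Validating before storage means a single off-codebook label from the model drops one record rather than corrupting the coded dataset.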