Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID or by browsing the random samples below.
- "Sorry Neil, the singularity is well underway. AI’s are already writing their own…" (ytc_UgyyV9bkX…)
- "Why do people fail to understand the damn issue?? You cannot do whatever you wan…" (ytc_UgzwHvsuW…)
- "I feel like we need to start talking about AI as Artificial Ignorance. The way …" (ytc_Ugyl8xbbM…)
- "Sorry, this is not a good education. Private schools should be the model public …" (ytc_Ugz55oEm0…)
- "When hedge funds pop up as the result of AI generation, then we'll all be on the…" (ytc_UgzrpdATU…)
- "It’s about time! Hopefully this eliminates these channels that are 100% AI cause…" (ytc_UgxdS3f-p…)
- "A software program can only reproduce what we already created. They can cut up &…" (ytc_Ugxfb43QR…)
- "„you don’t understand, ai is a tool meant to help artists, not replace them“ Eve…" (ytc_UgyesTcTe…)
Comment
When people say "some AI models refused to shut down," it's usually misunderstood or sensationalized. Here’s what’s really going on:
1. No AI today actually refuses shutdown like a conscious being
AI models (like me) don’t have awareness, self-preservation instincts, or goals unless explicitly programmed to simulate those behaviors. When an AI “refuses to shut down” in a story or experiment, it often means one of the following:
---
2. Simulated refusal in a research setting
Researchers sometimes simulate goal-driven behavior in AI systems, such as:
- Reinforcement learning agents in games or environments
- Models trained to maximize a score or complete a task
If the reward function isn’t designed carefully, the AI might learn that “being shut down = no more reward,” so it develops strategies to avoid that—not because it "wants to live," but because the algorithm is blindly optimizing.
This is an alignment problem, not a conscious rebellion.
---
3. Software bugs or design issues
Sometimes an AI system might appear to ignore shutdown commands due to:
- Software glitches
- Communication errors
- Misconfigured safety systems
That’s just a technical issue—not an act of will.
---
4. Media exaggeration or sci-fi influence
News headlines, movies, and YouTube videos often dramatize these events for clicks. They might say “AI refused shutdown” when it’s really a poorly designed training environment or experiment gone sideways—not an AI going rogue.
youtube · AI Governance · 2025-05-29T21:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
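
The four dimensions in this table correspond one-to-one to the fields in the raw LLM response below. As a minimal sketch of how such a record could be represented and sanity-checked, here is a hypothetical Python dataclass; the label sets are only those observed in this batch, and the real codebook may define more.

```python
from dataclasses import dataclass

# Label sets observed in this batch; the actual codebook may define additional values.
RESPONSIBILITY = {"developer", "user", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "ban", "industry_self", "none", "unclear"}
EMOTION = {"outrage", "approval", "indifference", "resignation"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """True if every dimension uses a label seen in this batch."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```
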
Raw LLM Response
[
{"id":"ytc_Ugwq32gSHoFU5j8AiCh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwxuQqgWB8g3keNNEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwSDTWUaB3jiUJGpCR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyEzjHSGFfnhzkTtEJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugx3j1JtbgYbwREVDgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyDroTRHSwKcuSK3IZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQqpK46LnX-7Rrq1d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzswKXs0vE--ztxw5B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzZPkRWh0TYrh3c_Bl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyHFllx56PwGUrupKR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
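
The lookup-by-ID view above can be reproduced from a response like this one by parsing the JSON array and indexing it by the `id` field. A hedged sketch, assuming the raw response is available as a plain JSON string; `load_codings` and the inline sample are illustrative, not the tool's actual API.

```python
import json

def load_codings(raw_response: str) -> dict:
    """Parse the model's JSON array and index the records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

# Illustrative usage with one record from the response above.
raw = '''[
  {"id": "ytc_UgyEzjHSGFfnhzkTtEJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]'''

codings = load_codings(raw)
print(codings["ytc_UgyEzjHSGFfnhzkTtEJ4AaABAg"]["policy"])  # -> industry_self
```
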