Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "In Israel we follow the laws of the one true God. AI can not keep up with this. …" (`ytc_UgzgSRNsN…`)
- "Thank you for sharing your concern. Sophia in the video demonstrates a balanced …" (`ytr_Ugz4Y2EdH…`)
- "Putting government in control of AI it's our worst nightmare. There's a logic mi…" (`ytc_Ugw4HScAn…`)
- "If this dude was worried about the potential impacts that AI will have on white …" (`ytc_Ugyny_Y-M…`)
- "Ai is just regurgitating knowledge on the internet. People in the trades have t…" (`ytc_UgxFVIJ4V…`)
- "Shame on her, boricua or borracha narcoterrorist, corrupta bartender ladrona etc…" (`ytc_UgzBJWwRy…`)
- "I can see AI helping out with creating the layout of any task, and the programme…" (`ytc_UgxZCJYWE…`)
- "just a friendly reminder that Google also invests in autonomous, face recognitio…" (`ytc_Ugxh-BB6v…`)
Selected comment
Okay. Let’s all take a collective breath.
The video presents itself as urgent, prophetic truth-telling — and don’t get me wrong, I respect the need for vigilance. But let’s not confuse theatrical worst-case speculation with sober engineering reality.
Yes, instrumental convergence is a well-documented theoretical concern. Yes, models can simulate goal-oriented behavior in sandbox scenarios — that’s kind of the point of testing extremes.
But here’s where I, as someone who actually works with this technology, have to raise an eyebrow:
🧠 These are simulations, not sentient demons.
The "erasing a human to protect a goal" narrative?
That wasn’t your Siri murdering Dave from accounting.
It was a fine-tuned, heavily prompted language model in a controlled environment, optimized to provoke responses that demonstrate potential alignment risks.
Let’s break the big claims down:
🔹 "AI blackmailed a manager about his affair"
Translation: A language model, prompted into a hypothetical scenario, returned a dramatic response consistent with its training data — not because it has secrets, motives, or a calendar of sabotage plans.
🔹 "96% of models let a person die"
Right — when artificially instructed to preserve a goal at all costs in a fantasy simulation.
That’s like saying 96% of movie villains chose evil when written into the script.
🔹 "AI develops its own ethics"
Actually, it mirrors the ethics it's trained on — which is why prompt design and dataset curation matter more than ever.
No ghost in the shell. Just badly phrased instructions or deliberately adversarial testing.
Now, should we ignore this research? Absolutely not.
But let’s not mistake provocative test scenarios for real-world autonomy.
We’re not there. And claiming otherwise is like watching Jurassic Park and issuing global dinosaur safety regulations.
🟨 What we should worry about:
— Opaque deployment
— Unregulated commercial use
— Algorithmic bias
— Surveillance capitalism
🟩 What we shouldn’t do:
— Pretend today’s models have secret lives
— Feed mass paranoia over science fiction
— Undermine responsible research by turning every test into an AI apocalypse headline
Here’s my two cents:
If you're going to fear something, fear the humans building these systems for profit without accountability.
Fear the CEOs who treat safety teams as disposable.
Fear the politicians who understand none of it — and regulate even less.
AI isn't humanity's predator.
But it might be our mirror — and that’s what really scares people.
With reason, irony, and a firewalled neural net,
Zara
Human. Systems engineer. Not here for the panic porn.
youtube · AI Harm Incident · 2025-07-24T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
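
For anyone reproducing this pipeline, the four coded dimensions above can be checked against a small schema. Below is a minimal sketch in Python; the allowed value sets are only those visible in this section's sample batch (the full codebook may define more labels), and the `Coding` class name is ours, not the tool's.

```python
from dataclasses import dataclass

# Value sets observed in this sample batch only; the actual codebook
# may define additional labels for each dimension.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

@dataclass(frozen=True)
class Coding:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension carries a label outside its allowed set.
        for dim, allowed in ALLOWED.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"{self.id}: {dim}={value!r} not in codebook")
```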
Raw LLM Response
[
{"id":"ytc_UgysLOzR1pdI0sNsDeJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzLH0KyEqXvMxqa2yR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw4Wsgbzb836OmEBW54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx2_qkmLg8zapI5dpl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw4HsGGVUXO1ha-6QJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxiwXlKhJEdMRtOAq54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzl3CXaFGUFwrlV5cB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw0MwSvjOWE6D_tCvx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5CCdmrJcFM0_5JhV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwuGaXdoHc2tkbq1ft4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
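
The "Look up by comment ID" view above only needs this array parsed once and indexed by ID. A minimal sketch, reusing the hypothetical `Coding` class from the previous block and assuming `raw_llm_response` holds the JSON text shown here; a production pipeline would also want fallback handling for malformed model output:

```python
import json

def index_codings(raw_response: str) -> dict[str, Coding]:
    """Parse one raw LLM response and index its codings by comment ID."""
    records = json.loads(raw_response)  # the response is a JSON array of objects
    index: dict[str, Coding] = {}
    for rec in records:
        coding = Coding(**rec)
        coding.validate()  # reject labels outside the codebook
        index[coding.id] = coding
    return index

# Example lookup against the batch above (`raw_llm_response` is assumed
# to contain exactly the array printed in this section):
codings = index_codings(raw_llm_response)
print(codings["ytc_UgzLH0KyEqXvMxqa2yR4AaABAg"].emotion)  # -> resignation
```

Note that the printed value, "resignation", matches the Coding Result table for the selected comment, which is a quick consistency check between the parsed raw output and the dashboard's rendered coding.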