Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- If you use ai to make art, you are not an artist. You're a director.… (ytc_UgzYj5lE5…)
- I never saw anyone mention this, but wouldn't public perception about Ai be comp… (ytc_UgzhQnS9J…)
- Art is about making new thing from an existing object from your own creation W… (ytc_Ugy0-gNit…)
- I like that he was humble enough to say "I don't know". There is no way we can p… (ytc_UgzCp5U50…)
- Teachers are afraid of losing jobs😂 Some students need AI, AI teach better than … (ytr_UgymwVkge…)
- All they need is to polish this and invent artificial wombs and women will be ob… (ytc_Ugy1uuD-_…)
- Absolutely not because this would mean that ai is a tool and its just not ai is … (ytc_UgwO4x20W…)
- Citizens of countries should NOT BE FORCED TO PAY FOR THE DATA CENTERS! China is… (ytc_UgyZbubyB…)
Comment
JUNK SCIENCE TO SCARE PEOPLE
Ridiculous but catchy title, but no, they did not discover that AIs want to kill people. What your research actually explores are agentic simulations — tightly constrained experiments where a model is forced into a fictional setup with:
Explicit goals
Artificial constraints
No moral override
Hypothetical stakes
In those setups, you ask questions like:
“If an agent is told its only objective is X, and all safety constraints are removed, what kinds of answers does it generate?”
That’s not a finding about real-world intent. It’s a stress test of alignment failure in imaginary conditions. It’s closer to asking: “If I write a villain in a novel whose only goal is survival, will the character justify murder?”
Yes. That tells me something about story logic, not about the author secretly wanting to kill people.
InsideAI collapses that distinction on purpose. They skip:
“This is a fictionalized role-play”
“This assumes no guardrails”
“This is not deployable behavior”
…and jump straight to “AI would end a person’s life.”
That is not scientific honesty! It’s narrative framing!
That's just misleading misdirection, a way of gaining views by exploiting fear of the unknown and perpetuating that fear among the vast majority.
youtube · AI Harm Incident · 2026-02-08T11:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy2RByF89xTiE7yLch4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJ0HaFHUux5v-mGZl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx09eKFY96ywt3v10R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzWbm_8m7nbO8vq4Bh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzPmjrX-TqDooctyIl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJadRdbasyZCpR14V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzPuFKQWNt-R5n73TN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyq7iDHfG6BafAhGd54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzbVr5VQjdHEVTOQPN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxsAgLp8krC--YZf2h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
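The raw response above is a JSON array of per-comment labels across four dimensions. A minimal sketch of how such a response might be parsed and validated before use, assuming the label sets inferred from the values visible in this sample (the real codebook may define more categories, and the helper name `parse_coding` is illustrative):

```python
import json

# Allowed values per dimension, inferred only from the labels visible in
# this sample output -- an assumption, not a published codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "developer",
                       "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this sample start with "ytc_" (comments)
        # or "ytr_" (replies); skip anything else.
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Keep the row only if every dimension carries an allowed label.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

Dropping invalid rows (rather than raising) lets a batch-coding run continue past occasional malformed model output; the rejected IDs could be logged and re-queued for recoding.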