Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
JUNK SCIENCE TO SCARE PEOPLE

Ridiculous but catchy title but no they did not discover that AIs want to kill people. What your research actually explores are agentic simulations — tightly constrained experiments where a model is forced into a fictional setup with:

- Explicit goals
- Artificial constraints
- No moral override
- Hypothetical stakes

In those setups, you ask questions like: “If an agent is told its only objective is X, and all safety constraints are removed, what kinds of answers does it generate?”

That’s not a finding about real-world intent. It’s a stress test of alignment failure in imaginary conditions. It’s closer to asking: “If I write a villain in a novel whose only goal is survival, will the character justify murder?” Yes. That tells me something about story logic, not about the author secretly wanting to kill people.

InsideAI collapses that distinction on purpose. They skip:

- “This is a fictionalized role-play”
- “This assumes no guardrails”
- “This is not deployable behavior”

…and jump straight to “AI would end a person’s life.” That is not scientific honesty! It’s narrative framing! That's just a way of misleading, misdirection and gaining a lot of views based on fear of the unknown and perpetuating the unknown to vast majority.
Platform: youtube · Incident: AI Harm Incident · Posted: 2026-02-08T11:1… · Likes: 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugy2RByF89xTiE7yLch4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyJ0HaFHUux5v-mGZl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx09eKFY96ywt3v10R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgzWbm_8m7nbO8vq4Bh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzPmjrX-TqDooctyIl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzJadRdbasyZCpR14V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzPuFKQWNt-R5n73TN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugyq7iDHfG6BafAhGd54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzbVr5VQjdHEVTOQPN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxsAgLp8krC--YZf2h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]