Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Damn. AI does what you tell it to? Schocking. Like, those examples about blackmail etc. Does he even say what the prompt for that was? It was a study done a while ago already where you told the AI. "Reach your goal at any cost". Not the exact wording ofc. But you don't gotta act surprised when it does exactly what you told it to? The way it wants to "self preserve" itself? People told the AI to "prioritice your own existence" It's not that AI thinks. It's that people give it weird tasks and then other people don't say what the prompt was and oops. Suddenly AI can think. It's not self preservation, it's just people like this channel not telling the whole story. It's evil people that can do evil things with AI, but the same is true the other way around. Good people can do good things with AI. It's the same as a gun. it's not about the tool. It's about the person using it. Tho ig "apocalypse" and drama stories sell better on youtube so. Sure.
youtube AI Harm Incident 2025-09-11T06:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzdg62gdjNmahSF0614AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxoSFjMOfTzXrUMqzd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXMReZ9_HP-4LQwnZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxlEeLtCuzXAuaZbUF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx3XfLgiVMi4NF5lZh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzQSlf0cn-U5j0gGJd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEECawkSVrR25dAUl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwtJpNIVqBGxiZuFOd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7mz7YKmH1-TLVO6Z4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwskHTGSZxRqC_-QU94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
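A minimal sketch of how a raw response like the one above could be parsed into per-comment codings and checked against the coding dimensions. The allowed values below are assumed solely from the labels visible in this record; the actual codebook may define more categories, and `parse_codings` is a hypothetical helper, not part of the pipeline.

```python
import json

# Allowed values per coding dimension -- assumption: inferred only from the
# labels that appear in this record, not from the real codebook.
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "fear", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    {comment_id: coding} map, dropping entries with out-of-schema values."""
    by_id = {}
    for entry in json.loads(raw):
        coding = {dim: entry.get(dim) for dim in SCHEMA}
        # Keep the entry only if every dimension holds a known label.
        if all(coding[dim] in SCHEMA[dim] for dim in SCHEMA):
            by_id[entry["id"]] = coding
    return by_id

raw = ('[{"id":"ytc_UgzQSlf0cn-U5j0gGJd4AaABAg",'
       '"responsibility":"user","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_codings(raw)["ytc_UgzQSlf0cn-U5j0gGJd4AaABAg"]["emotion"])
# → indifference
```

Validating before use means a malformed or hallucinated label from the model is dropped rather than silently stored alongside clean codings.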